Search Results

Search found 16987 results on 680 pages for 'second'.

Page 292/680 | < Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >

  • How to copy only VM changes in VMware to another physical machine

    - by Paul
    How can I use VMware Workstation to copy only the changes to a VM to another physical machine, rather than recopying the entire VM every time? Can I just copy the snapshot files, if the second physical machine starts from a pristine copy of the VM? Would it be better to create a linked clone against the original VM and then copy the linked clone directory each time? Does that even work? (I'm assuming I'll have to change the path in the VM's metadata file, since the paths will be different, and that I can't clone a linked clone without cloning the base.) Background: another developer and I are using VMware Workstation VMs for systems development. I have to work from a home desktop. Transporting a multi-gigabyte VM in either direction takes several hours at best. Downloading it is easiest, but it takes an hour to get the VM onto a DMZ machine for download and then many more hours to actually download it.

    Read the article

  • How to make an Actor follow my finger

    - by user48352
    I'm back with another question that may be really simple. I have a texture drawn on my spritebatch and I'm making it move up or down (y-axis only) with Libgdx's Input Handler: touchDown and touchUp. @Override public boolean touchDown(int screenX, int screenY, int pointer, int button) { myWhale.touchDownY = screenY; myWhale.isTouched = true; return true; } @Override public boolean touchUp(int screenX, int screenY, int pointer, int button) { myWhale.isTouched = false; return false; } myWhale is an object of the Whale class, where I move my texture's position: public void update(float delta) { this.delta = delta; if(isTouched){ dragWhale(); } } public void dragWhale() { if(Gdx.input.getY(0) - touchDownY < 0){ if(Gdx.input.getY(0)<position.y+height/2){ position.y = position.y - velocidad*delta; } } else{ if(Gdx.input.getY(0)>position.y+height/2){ position.y = position.y + velocidad*delta; } } } So the object moves toward the centre of wherever the person is pressing his/her finger, and most of the time it works fine, but the object seems to take about half a second to move up or down, and sometimes when I press my finger it won't move at all. Maybe there's another, simpler way to do this. I'd highly appreciate it if someone points me in the right direction.
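    One possibility, sketched below as an untested LibGDX-style fragment: let touchDragged (part of the same InputProcessor interface as touchDown/touchUp) record the finger's current y every time it moves, and have update() ease the whale toward that target, which avoids waiting on the next press and smooths out the half-second lag. The targetY and followSpeed names are assumptions layered on the question's own code, and the snippet assumes position.y is in the same y-down screen coordinates as screenY; with a y-up camera you would convert first (e.g. via camera.unproject).

        @Override
        public boolean touchDragged(int screenX, int screenY, int pointer) {
            // Fires continuously while the finger moves, not just on the initial press.
            if (myWhale.isTouched) {
                myWhale.targetY = screenY;   // remember where the finger is right now
            }
            return true;
        }

        // In the Whale class: ease toward the finger instead of stepping by a fixed
        // velocity, so the whale closes large gaps quickly but settles smoothly.
        public void update(float delta) {
            if (isTouched) {
                float targetCenterY = targetY - height / 2f;
                // followSpeed is an assumed tuning constant, e.g. 10f.
                position.y += (targetCenterY - position.y) * Math.min(1f, followSpeed * delta);
            }
        }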

    Read the article

  • One codebase - lots of hosted services (similar to a basecamp style service) - planning structure

    - by RickM
    We have built a service (PHP-based) for a client, and are now looking to offer it to other clients as a hosted service. For this example, think of it like a hosted forum service, where a client signs up on our site and is given a subdomain or can use their own domain; the code picks up the domain, checks it against a 'master' users table, and then loads the content as needed. I'm trying to work out the best way of handling multiple clients. At the moment I can only think of two options that would work: Option 1 - Have one set of database tables, but give each table a column called 'siteid'; this would mean every query has to check the siteid. This would effectively work with just one codebase and one database. Option 2 - Have one 'master' database with all the core stuff such as the client details and their domain. Then, when the system checks the domain, it pulls the client's database details (username/password/dbname) from a table and connects to a second database. The issue here is the security of the MySQL server details; however, it does have the benefit that each client runs their own database instead of sharing one. Which option would I be better off taking here, and why? Ideally I want it to be fairly easy to convert the 'standalone' script to the 'multi-domain' script, as we're on a tight deadline.
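    For what it's worth, Option 1 usually boils down to resolving the tenant once per request and scoping every query by it. The fragment below is only an illustration of that idea in Java/JDBC (the service in question is PHP, and the table and column names here - sites, site_id, threads - are invented), not a recommendation of either option.

        import java.sql.*;
        import java.util.*;

        // Illustration only: shared-schema multi-tenancy (Option 1).
        class TenantScopedQueries {
            // Resolve the tenant once per request from the Host header.
            static int resolveSiteId(Connection db, String host) throws SQLException {
                try (PreparedStatement ps = db.prepareStatement(
                        "SELECT site_id FROM sites WHERE domain = ?")) {
                    ps.setString(1, host);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) throw new SQLException("Unknown domain: " + host);
                        return rs.getInt(1);
                    }
                }
            }

            // Every data query then carries the same scoping predicate.
            static List<String> loadThreadTitles(Connection db, int siteId) throws SQLException {
                List<String> titles = new ArrayList<>();
                try (PreparedStatement ps = db.prepareStatement(
                        "SELECT title FROM threads WHERE site_id = ? ORDER BY created_at DESC")) {
                    ps.setInt(1, siteId);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) titles.add(rs.getString(1));
                    }
                }
                return titles;
            }
        }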

    Read the article

  • Problem opening files with Gvim on Windows 7

    - by Oscar Duignan
    Just installed gvim on Windows 7 for the first time, and I'm having a problem opening some files. When I open a file, vim seems to flash up a cmd window for a few seconds before closing it (too quickly to make out the contents), and I end up with a C:/Program folder and a /Files/vim/vimfiles/doc/ folder in the directory of the file I just opened. I can see it's trying to access C:/Program Files/vim/vimfiles/doc, which is where vim is installed, but it's choking on the space, and I'm not familiar enough with gvim to work out why. Any and all ideas are greatly appreciated.

    Read the article

  • Windows 2008 VPS always crashes when out of disk space

    - by Pickels
    Hello, I am renting a Windows Server 2008 DC SP2 VPS for hosting my ASP.NET projects. Now, for the second time this month, my VPS has run out of disk space. The first time it was a log file that got too big; yesterday it was my mistake for uploading a website without noticing the lack of space on the VPS. The side effect is that my VPS corrupts some files when trying to write them. Last time it was Plesk that stopped working; yesterday it was IIS. So I was wondering: is this normal behavior? I called my service provider to ask if they could restore a backup and whether this is normal, and they assured me it was. I am not trying to blame them, and I know it's mostly my fault for not monitoring my VPS better or for not setting better defaults.

    Read the article

  • I get regular power surges and my UPS ticks a lot due to a welding plant. Is this good or bad?

    - by EApubs
    In our home there is a mechanic who often uses a welding plant. When he uses it, I think we get a power surge: my UPS ticks madly (it doesn't beep, it ticks). Sometimes I even lose power to the keyboard, and when typing, some keystrokes go missing. 1) My question is: is this harmful to the computer? The UPS claims to have surge protection, but isn't it working? What should I do to protect my PC? 2) The second question: I also have a broadband router that is not connected to the UPS. Will it be affected?

    Read the article

  • Data Center Modernization: Harness the power of Oracle Exalogic and Exadata with PeopleSoft

    - by Michelle Kimihira
    Author: Latha Krishnaswamy, Senior Manager, Exalogic Product Management. Allegis Group, a Hanover, MD-based global staffing company, is the largest privately held staffing company in the United States, with more than 10,000 internal employees and 90,000 contract employees. Allegis Group is a $6+ billion company offering a full range of specialized staffing and recruiting solutions to clients in a wide range of industries. The company processes about 133,000 paychecks per week, every week of the year. With 300 offices around the world and the hefty task of managing HR and payroll, the PeopleSoft system at Allegis is a mission-critical application. The firm is in the midst of a data center modernization initiative. Part of that project meant moving the company's PeopleSoft applications (the Financials and HR modules as well as a custom Time & Expense module) to a converged infrastructure. The company ran a proof of concept with four different converged architectures before deciding upon Exadata and Exalogic as the platform of choice. Performance combined with high availability for running mission-critical payroll processes drove this decision. During the testing on Exadata and Exalogic, Allegis applied a particular tax update (11-F) in the production environment. A job that had run for roughly six hours completed in less than 1.5 hours. With additional tuning, the second run of tax update 11-F dropped to 33 minutes - a 90% improvement! Not only that, the move will help the company save money on middleware by consolidating its use of Oracle licensing on a single platform. Summary: With a modern data center powered by Exalogic and Exadata running mission-critical PeopleSoft HR and Financial applications, Allegis is positioned to manage business growth and improve employee productivity. PeopleSoft applications run on an engineered systems platform, minimizing hardware and software integration risks. Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter and Facebook. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Why, when I lose my internet connection on Kubuntu, am I unable to reconnect?

    - by Jonathan
    Using Kubuntu 11.10 on a Sony Vaio. Network controller: Intel Corporation WiFi Link 5100. If I connect to a wireless network and the signal drops, I am then unable to connect to any network without a reboot. I can assure you that the issue has nothing to do with the computer going to sleep, as I have experienced the above while using my computer continuously. Here is exactly what happens: I connect to a network (at university, where the connection is not so great). The connection drops. There are three other networks available, but I cannot connect to any of them; I have tried, off and on, sometimes for hours. I am always able to reestablish a connection after a reboot. I can only think of two explanations. The first is that a temp file is corrupted when the internet connection is abruptly dropped. The second is that my computer actually corrupts something before the loss of connection, which causes both the loss of signal and the inability to reconnect. However, I am not confident that my explanations are complete, nor do I have any idea how to test these things.

    Read the article

  • Operating System not Found after installing Ubuntu in a Sony Vaio

    - by diego8arock
    I just bought a Sony Vaio SVS15115FLB that came with Windows 7. After enjoying the PC's graphics power for a little while, I decided it was time to install Ubuntu 12.04. First I inserted a USB stick, rebooted and pressed F11, but a message appeared saying that no OS was found on the USB, so I used a live CD instead. It booted fine and I installed Ubuntu. When it was time to restart the PC, it didn't boot to GRUB but went straight to Windows, which started a startup-error repair looking for a solution; after it was done, it restarted, booted into Windows again and ran the same startup-error repair. I freaked out, so I booted the Ubuntu live CD again and installed Ubuntu over everything. After it installed I rebooted, and then a message appeared saying Operating System Not Found, and I have no idea why. So I Googled again and found this post on Boot Partition. I did everything exactly as in that post, but it didn't work (by the way, this was the message): The boot files of [Ubuntu 12.04 LTS] are far from the start of the disk. Your BIOS may not detect them. You may want to retry after creating a /boot partition (EXT4, >200MB, start of the disk). This can be performed via tools such as gParted. Then select this partition via the [Separate /boot partition:] option of [Boot Repair]. It appeared the first time; then I did it all again and it was gone. I rebooted and nothing - the same Operating System Not Found message appeared. So I decided to create a partition for Windows, hoping for something, but the message still appears. I really have no idea what to do, but there is something odd: if I insert the USB stick containing Ubuntu 11.10, the message saying there is no OS on the pen drive flashes for a fraction of a second and the machine boots straight into Ubuntu 12.04 without problems (and it booted into Windows when I installed it, ignoring Ubuntu). Right now I'm using it like that, but it's pretty annoying. Can anyone advise me how to fix this? I'm no expert on this kind of thing (boot, GRUB, recovery and stuff like that).

    Read the article

  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of: 1. Always treat the client-side data as higher priority. 2. The same, for the server side. 3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit. Though I'm certain the third option will open the door to some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to that question says it probably can't be done.
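    To make option 3 concrete, here is a rough, untested Java sketch of per-field last-writer-wins merging (the FieldValue type and millisecond timestamps are assumptions for illustration; as the question notes, this strategy silently drops the older of two concurrent edits, which is exactly where the corruption risk lives):

        import java.util.HashMap;
        import java.util.Map;

        // Illustration of per-field last-writer-wins conflict resolution.
        class FieldValue {
            final Object value;
            final long editedAtMillis;   // when this field was last edited on that side
            FieldValue(Object value, long editedAtMillis) {
                this.value = value;
                this.editedAtMillis = editedAtMillis;
            }
        }

        class LastWriterWinsMerge {
            // For each field present on either side, keep the copy with the newer timestamp.
            static Map<String, FieldValue> merge(Map<String, FieldValue> client,
                                                 Map<String, FieldValue> server) {
                Map<String, FieldValue> merged = new HashMap<>(server);
                for (Map.Entry<String, FieldValue> e : client.entrySet()) {
                    FieldValue serverSide = merged.get(e.getKey());
                    if (serverSide == null
                            || e.getValue().editedAtMillis >= serverSide.editedAtMillis) {
                        merged.put(e.getKey(), e.getValue());
                    }
                }
                return merged;
            }
        }

    Client clocks drift, so real implementations usually replace wall-clock timestamps with per-record version counters or vector clocks, and fall back to asking the user when neither side clearly dominates.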

    Read the article

  • Strange noise from my laptop when it is closed

    - by gotqn
    I am a fan of powerful PCs with huge wide-screen monitors, but I have bought a laptop as a gift for my parents. This is the description: a second-hand Lenovo (with a "ThinkPad" badge), Intel Core Duo CPU T6570 2.10 GHz, 64-bit Windows 7 Ultimate. The issue is that when the laptop is turned on and then closed, after an interval of time (I am not sure how long exactly, but I believe it is several hours) a strange noise comes from the laptop. The noise is something like the start-up noise of old computers. I first thought this was some hardware issue, but everything seems to work fine. Does anyone have an idea what could cause this type of noise, or what I can do to track the problem down? I am not sure if this is an important detail, but the laptop is always connected to power. Note: I have just noticed that the sound is also generated when the laptop is closed and then opened. Note: This is a link to the audio - http://yourlisten.com/channel/content/16934332/WindowsStrangeNoise

    Read the article

  • Apache SSL for login and NON-SSL for everything else (.htaccess)

    - by The Devil
    Hey, I've almost figured this out on my own, but there's something I'm missing. I want to set a couple of directories and files to require SSL, and everything else that's not related to those files and dirs to point back to http. So far I have this: RewriteEngine on RewriteBase / # Force ssl for login & admin RewriteCond %{HTTPS} !on RewriteRule ^/?(admin(.*)|login\.php)$ https://%{SERVER_NAME}/$1 [R,NC,L] # Force non-ssl for others RewriteCond %{HTTPS} on RewriteRule ^/?(admin(.*)|login\.php)$ http://%{SERVER_NAME}/$1 [R,NC,L] I'm sure I'm doing something wrong, but I just can't figure it out. The first condition works perfectly - whenever I access login.php or /admin/ it points to https. But the second one doesn't. Where did I go wrong? Thanks in advance!

    Read the article

  • ALPS touchpad stops working after reboot

    - by user58289
    I recently upgraded to 12.04 LTS. The touchpad on my Compaq Presario CQ-40 324la worked after the first restart following installation, but after restarting a second time the touchpad is completely disabled, without my having changed anything in the system. I've tried the usual solutions but haven't had good results. The applications I've installed (from those "solutions") are: Pointing Devices, Synaptiks and Dconf-Tools. I've also tried updating GRUB, adding this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i8042.nomux". I tried these commands in a terminal; neither has worked: sudo modprobe -r psmouse sudo modprobe psmouse proto=imps

    Read the article

  • Make IP Address point to webroot instead of virtual hosts' documentroot

    - by Reuben L.
    I used to have a one-to-one mapping between domain name and IP. Recently I've paid for a second domain name and decided to host it on the same box and IP. As such, I added virtual hosts to point each domain name to a different document root (i.e. /var/www/webbie1 and /var/www/webbie2). The question I have is: can I still make the IP, e.g. http://XXX.XXX.XXX.XXX, point to the webroot, i.e. /var/www/? If so, how do I go about doing it? For a fuller picture, the box is on an Ubuntu server OS and I'm using Apache2 as the app server. The changes I made to enable the virtual hosts were in the apache2.conf file, with the <VirtualHost [IP address]> ... </VirtualHost> tags. Thanks.

    Read the article

  • Java EE @ Devoxx UK

    - by delabassee
    Devoxx UK is taking place next week (12th and 13th June) in London. As with any Devoxx conference, this UK edition will have a nice mix of content, an impressive list of speakers, and obviously Java EE will be well covered too: Apache TomEE, Java EE Web Profile and more on Tomcat (David Blevins); Myths, Tales and Voodoo - About Java EE and Testing (Adam Bien); 50 new features of Java EE 7 (Antonio Goncalves & Arun Gupta); Java EE 7 Hands-on Lab (Arun Gupta). In addition, there will be 2 BoFs related to Java EE on Thursday evening: the first BoF will be about the Java EE platform, and the second one will be about the Java EE Reference Implementation, i.e. GlassFish. I will participate in the Java EE Community BoF, where we will discuss Java EE in general, but with all the recent activity I suspect a large portion of the BoF will be spent discussing the current plans for Java EE 8. Right after, and in the same room, I will join Steve Millidge of C2B2 for the GlassFish is here to stay! BoF. The goal is to discuss GlassFish: the current status, the plans for the next release, how the community can contribute, etc. It should be mentioned that attending those BoFs is completely free; just make sure to register here. So if you are in London next week, mind the Geek and see you at Devoxx UK!

    Read the article

  • Varnish and Plesk

    - by Raphaël
    I have an e-commerce site with 300,000 products and 20,000 categories. It is slow and currently in production. I decided to install Varnish to speed it up. The trouble is that during installation I got a Guru Meditation error. Since the site is in production, I cannot afford to leave this error up for more than a second, and I worry I have made an enormous blunder. I followed this tutorial: http://www.euperia.com/linux/setting-up-varnish-with-apache-tutorial I'm sure I followed it all without error. I suspect there may be a Plesk-specific configuration involved. Has anyone already installed Varnish on an Ubuntu 10.04 server with Plesk 10? Does anyone have a better resource? I know it is "very vague" as an error, but maybe some of you have had this problem. Sincerely,

    Read the article

  • How much HDD space would I need to cache the web while respecting robots.txt?

    - by Koning Baard XIV
    I want to experiment with creating a web crawler. I'll start by indexing a few medium-sized websites like Stack Overflow or Smashing Magazine. If it works, I'd like to start crawling the entire web, respecting robots.txt. I'll save all HTML, PDF, Word, Excel, PowerPoint, Keynote, etc. documents (not EXEs, DMGs and the like, just documents) in a MySQL DB. Next to that, I'll have a second table containing all results and descriptions, and a table with words and on which page to find those words (i.e. an index). How much HDD space do you think I need to save all the pages? Is it as low as 1 TB, or is it about 10 TB, 20? Maybe 30? 1000? Thanks
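    As a sanity check on the order of magnitude, here is a tiny, assumption-laden Java calculation (both figures are made up for illustration: 1 billion stored documents at an average of 100 KB each, which is already far more than a "few medium-sized websites"):

        // Back-of-the-envelope storage estimate; every figure here is an assumption.
        public class CrawlSizeEstimate {
            public static void main(String[] args) {
                long documents = 1_000_000_000L;      // assumed number of stored documents
                long avgDocumentBytes = 100L * 1024;  // assumed average document size (100 KB)
                double tib = (double) documents * avgDocumentBytes
                        / (1024.0 * 1024 * 1024 * 1024);
                System.out.printf("Raw documents alone: about %.0f TiB%n", tib);
                // Prints roughly 93 TiB, before the index and results tables,
                // which typically add a sizeable fraction of that again.
            }
        }

    In other words, the answer depends almost entirely on how many documents you actually keep: a handful of large sites fits comfortably in a terabyte, while anything approaching "the entire web" runs to tens or hundreds of terabytes.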

    Read the article

  • Tools for analyzing performance of SQL Server/Express?

    - by Adam Crossland
    The application that I have customized and continue to support for my client is seeing dramatic performance problems in the field. Simple queries on rather small datasets take over a minute, when I would expect them to complete in sub-second times. My current theory is that SQL Server Express 2005 is too limited for the rather non-trivial demands being made of it, but I am not sure how to go about gathering data that I can use to either prove my point or allow me to move on to finding another cause. Can anyone point me toward some tools that would allow me to analyze the load on this database? Information such as simultaneous connections, execution times of individual queries, memory usage - heck, just about any profiling data at all - would be a help. Many thanks.

    Read the article

  • Resizing partition in Windows 7: how long does it take?

    - by PaulJ
    Okay, maybe this is me worrying about nothing, but... I have a 500 GB external drive where I want to create a second partition. I plug it into my Windows 7 box, use Disk Management and pick the "Shrink Volume" option. It says that the maximum amount to shrink is around 150 GB. I hit "OK" and it starts working... and it's been going for about half an hour. The light on the external HD is constantly working. Disk Management is greyed out and shows the "does not respond" message in the title bar; basically, it's behaving like a non-responding application. Is this normal for a drive of this size, or did the application hang? How long would it typically take for a drive like this to resize its partition?

    Read the article

  • best way to host multiple WordPress sites on a single VPS [migrated]

    - by Ben
    Not sure if this is a webmaster or a WordPress question - it's a bit half and half, sorry if I'm posting in the wrong place. Without using Multi-Site or installing new WordPress CMSes on second-level domains, what's the best way to get multiple WordPress installs running on my VPS (running Linux-powered CentOS 6 with WHM and cPanel)? It's currently working, but only by setting the permalinks option to the default setting, so the URLs aren't human-friendly. I have come across something called WPSiteStack, though I'd really rather not go down this route. Long story short, I need the following: separate installs, so one core/theme/plugin update doesn't affect all sites and the security of each site is improved; 'pretty' permalinks; each WordPress install must be in the root of its own domain to ensure that I can accurately measure my clients' quotas. It may also be worth noting that some functions within each install use the $_SERVER['DOCUMENT_ROOT'] and $_SERVER['HOST'] variables. I have already edited the httpd-vhosts.conf, httpd.conf and .htaccess files, but this hasn't changed anything. So any ideas what I'm missing or doing wrong? Any help is much appreciated.

    Read the article

  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard, for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction - meaning that if the paddle is at the bottom, it goes left and up along the circle or, if it's in the upper hemisphere, it moves left and down, both times toward the outer left point of the circle. Both feel kind of weird. With the first one, it can be counterintuitive to press Left to move the paddle right when it's in the upper area, while in the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for (control conventions for circular movement was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a question that I simply didn't see, please excuse the duplicate.
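    There doesn't seem to be one canonical convention, but both mappings described above reduce to driving the paddle with a signed angular velocity, with the "direction-relative" variant flipping the sign depending on which hemisphere the paddle is in. A rough, engine-agnostic Java sketch of that idea (the project is Unity/C#, so this is purely illustrative; center, radius and angularSpeed are assumed values):

        // Paddle constrained to a circle (y-up math coordinates).
        // input is -1 while the left key is held, +1 for the right key, 0 otherwise.
        class CircularPaddle {
            double centerX, centerY;        // circle center
            double radius;                  // circle radius
            double angle = -Math.PI / 2;    // start at the bottom of the circle
            double angularSpeed = 2.0;      // radians per second (tuning value)
            boolean screenRelative = false; // false: a key always rotates the same way
            double posX, posY;              // current paddle position

            void update(int input, double delta) {
                double direction = input;
                if (screenRelative && Math.sin(angle) > 0) {
                    // Paddle is in the upper hemisphere: flip the sign so the left key
                    // still pushes the paddle toward the left side of the screen.
                    direction = -direction;
                }
                angle += direction * angularSpeed * delta;
                posX = centerX + radius * Math.cos(angle);
                posY = centerY + radius * Math.sin(angle);
            }
        }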

    Read the article

  • Can I use an nVidia Geforce 9600GT for triple-monitor?

    - by John Rudy
    My work PC has (as you can guess from the title) a single Geforce 9600GT card. According to the spec page for this card, both DVI ports are Dual-Link DVI. Both are currently in use on Dell 19" 1280x1024 screens. I'm running Windows 7, with Aero enabled, for what that's worth. What I'm wondering is if I can use a standard DVI splitter cable to drive a third screen from one of the ports -- specifically a Samsung 22" 1680x1050. I'm really outgrowing the two-screen setup, and am strongly considering bringing in my home monitor (which is almost never used) to bolster my usable space. If I can't, then my alternative is to replace one of these Dells with that Samsung, to give me at least a "little" more space. (Before you ask, there's probably no chance I can get my company to spring for a second video card and third monitor. Even if I could, it would not be politically expedient, as a number of my other coworkers are already pretty jealous at my setup.)

    Read the article

  • SSIS Catalog: How to use environment in every type of package execution

    - by Kevin Shyr
    Here is a good blog on how to create an SSIS Catalog and set up environments: http://sqlblog.com/blogs/jamie_thomson/archive/2010/11/13/ssis-server-catalogs-environments-environment-variables-in-ssis-in-denali.aspx. Here I will summarize the 3 ways I know of so far to execute a package while using variables set up in an SSIS Catalog environment. The first way: the SSIS project references an environment, and one of the project parameters uses a value set up in an environment called "Development". With this set-up you are limited to executing packages by right-clicking them in the SSIS Catalog list and selecting Execute, but you are free to choose an absolute or relative path to the environment. The following screenshot shows the 2 available paths to your SSIS environments. Personally, I use the absolute path because of option 3, just to keep everything simple for myself. The second option is to call the package through a SQL job. This does require your project to already reference an environment and use its variables. When a job step is set up, the configuration part will require you to select that reference again. This is more useful when you want to automate the same package that needs to be run in different environments. The third option is the most important to me, as I have an SSIS framework that calls hundreds of packages. The main part of the stored procedure is in this post (http://geekswithblogs.net/LifeLongTechie/archive/2012/11/14/time-to-stop-using-ldquoexecute-package-taskrdquondash-a-way-to.aspx), but the top part had to be modified to include the logic for using an environment reference:

    CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
          @PackageName NVARCHAR(255)
        , @ProjectFolder NVARCHAR(255)
        , @ProjectName NVARCHAR(255)
        , @AuditKey INT
        , @DisableNotification BIT
        , @PackageExecutionLogID INT
        , @EnvironmentName NVARCHAR(128) = NULL
        , @Use32BitRunTime BIT = FALSE
    AS
    BEGIN TRY
        DECLARE @execution_id BIGINT = 0;
        -- Create a package execution
        IF @EnvironmentName IS NULL
        BEGIN
            EXEC [SSISDB].[catalog].[create_execution]
                @package_name=@PackageName,
                @execution_id=@execution_id OUTPUT,
                @folder_name=@ProjectFolder,
                @project_name=@ProjectName,
                @use32bitruntime=@Use32BitRunTime;
        END
        ELSE
        BEGIN
            DECLARE @EnvironmentID AS INT
            SELECT @EnvironmentID = [reference_id]
            FROM SSISDB.[internal].[environment_references] WITH(NOLOCK)
            WHERE [environment_name] = @EnvironmentName
                AND [environment_folder_name] = @ProjectFolder
            EXEC [SSISDB].[catalog].[create_execution]
                @package_name=@PackageName,
                @execution_id=@execution_id OUTPUT,
                @folder_name=@ProjectFolder,
                @project_name=@ProjectName,
                @reference_id=@EnvironmentID,
                @use32bitruntime=@Use32BitRunTime;
        END
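    If the caller sits outside SQL Server (for example a Java service rather than a SQL job or another package), invoking the wrapper is an ordinary stored-procedure call. A hedged JDBC sketch, where the connection details and all parameter values are placeholders and only the procedure's parameter list is taken from the code above:

        import java.sql.*;

        // Illustration only: calling the wrapper procedure from Java via JDBC.
        class LaunchPackage {
            static void launch(Connection con) throws SQLException {
                try (CallableStatement cs = con.prepareCall(
                        "{call AUDIT.LaunchPackageExecutionInSSISCatalog(?,?,?,?,?,?,?,?)}")) {
                    cs.setString(1, "LoadSales.dtsx");   // @PackageName (placeholder)
                    cs.setString(2, "Finance");          // @ProjectFolder (placeholder)
                    cs.setString(3, "Warehouse");        // @ProjectName (placeholder)
                    cs.setInt(4, 1);                     // @AuditKey
                    cs.setBoolean(5, false);             // @DisableNotification
                    cs.setInt(6, 42);                    // @PackageExecutionLogID
                    cs.setString(7, "Development");      // @EnvironmentName
                    cs.setBoolean(8, false);             // @Use32BitRunTime
                    cs.execute();
                }
            }
        }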

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue: Separate editions for each library version: I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries. One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try and make this testing automatic by having some non-deployed Maven projects that import the software library, the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds. I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

    Read the article
