Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • VirtualBox will run from a 32-bit installation CD but not a 64-bit one

    - by MetaDark
    I am trying to set up Windows in VirtualBox so I don't need to reboot on the rare occasion that I actually need it. The problem is, VirtualBox doesn't report any errors when I insert the 32-bit installation CD, but when I try to use the 64-bit installation I get an error. What!? I am already using the installation disc! I've checked my BIOS to see if I have SVM (AMD's version of VT) disabled, and all I see is "Enabled". I have a K9N6PGM2-V2 motherboard, a triple-core AMD Athlon II, an Nvidia nForce 430 integrated graphics card, 4GB of RAM, an 80GB IDE drive and a 1TB SATA drive. I don't think the last three specifications matter, but just in case. I am pretty sure the CD isn't broken (I am going to make sure in just a moment), so what could be the cause of this problem? Edit: the 64-bit installation CD is not broken, but I found out when trying to install from the 32-bit version that it's trying to upgrade, not perform a fresh install - odd.
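
    A quick way to check whether the host side is actually ready for a 64-bit guest is sketched below; the VM name and the exact VBoxManage switches are assumptions based on recent VirtualBox versions, not something taken from the question.

        # Does the CPU expose AMD-V (svm) to the OS? A non-zero count means the flag is visible.
        egrep -c '(vmx|svm)' /proc/cpuinfo

        # VirtualBox also needs the guest itself to be declared 64-bit ("Windows7" is a placeholder VM name).
        VBoxManage modifyvm "Windows7" --ostype Windows7_64 --longmode on
        VBoxManage showvminfo "Windows7" | grep -i -e "guest os" -e "long mode"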

    Read the article

  • g++ cannot find include files (qt3)

    - by Allan
    allan@allan-VirtualBox:~/blackjack_for_the_hopelessly_luckless$ make
    g++ -c -pipe -g -Wall -W -O2 -D_REENTRANT -DQT_NO_DEBUG -DQT_THREAD_SUPPORT -DQT_SHARED -DQT_TABLET_SUPPORT -I/usr/share/qt3/mkspecs/default -I. -I. -I/usr/include/qt3 -o advicewindow.o advicewindow.cpp
    advicewindow.cpp:32:19: fatal error: QWidget: No such file or directory
    compilation terminated.
    make: *** [advicewindow.o] Error 1
    qt3 was installed using apt-get, and the header files are located in /usr/include/qt3/. Is there a g++ config file or something I need to update? I'm new to compiling from source and not sure what to do. The Makefile was created using qmake from the project file. The files in the include directory are all lower case; should I change the code in advicewindow.cpp to include qwidget.h? Any help is appreciated. Thanks.
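
    For what it's worth, Qt3 ships lowercase headers (qwidget.h) rather than the Qt4-style <QWidget> wrappers, so the include line itself is the likely culprit; a minimal sketch of the change, assuming the file really does use the Qt4 spelling:

        # Confirm what the Qt3 package actually provides
        ls /usr/include/qt3 | grep -i '^qwidget'

        # Switch the Qt4-style include to the Qt3 header, then rebuild
        sed -i 's|#include <QWidget>|#include <qwidget.h>|' advicewindow.cpp
        make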

    Read the article

  • Decrease filesize when resizing with mogrify

    - by plua
    I love the command line options of ImageMagick. Mogrify is great for resizing images and changing quality, which is what I use most often. However, I have noticed that the filesize is often larger than it should be, especially with small images. For instance, I have a regular 640px (width) photo, which I change to quality 80 and a width of 80px with mogrify -quality 80 -resize 80 file.jpg. It works well and my image gets resized and the quality is changed to 80. However, the filesize is around 40Kb. For such a tiny image, that is huge! When I use mtPaint and open the file and save it (not changing anything, just CTRL+O, CTRL+S), the filesize decreases by more than 95% to less than 2Kb! I have seen this is often the case. What is going wrong?
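
    The extra bytes usually come from metadata (EXIF data, embedded thumbnails, colour profiles) that mogrify copies through by default; a hedged sketch of a slimmer invocation, using the same file name as above:

        # -thumbnail resizes like -resize but drops most embedded metadata,
        # and -strip removes any remaining profiles/EXIF data
        mogrify -strip -quality 80 -thumbnail 80 file.jpg

        # check the resulting file size
        identify -format '%b\n' file.jpg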

    Read the article

  • How to control an Ubuntu PC from another Ubuntu PC over Internet?

    - by Naveen
    There are two Ubuntu PCs, called A and B. A and B are connected to the Internet using two separate Internet connections (in my case, two mobile broadband connections, ppp0 on each). Each connection has a unique and static public IP address. What I need is to control computer A's cursor using computer B's mouse, over the Internet. On both computers I have allowed other users to control my computer in the Desktop Sharing preferences. When I try to connect to A from B using the Remmina Remote Desktop Client, it refuses to connect after trying for a while. I expect this to be done with available open source software, not with TeamViewer. I found this guide hard to understand. Please provide me with clear instructions... Thanks for having a look!
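
    One open source route that sidesteps direct VNC connections is to tunnel the built-in vino server over SSH; a rough sketch, where the user name and the public IP of machine A are placeholders and openssh-server is assumed to be running on A:

        # On B: forward a local port to A's Desktop Sharing (vino) server, which listens on 5900
        ssh -L 5900:localhost:5900 user@203.0.113.10

        # Still on B, in another terminal, point any VNC client at the tunnel
        vncviewer localhost:0    # display 0 == port 5900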

    Read the article

  • Is it legal to ask for a photo ID and credit card copy in the U.S.?

    - by selim
    I regularly order from online shops around the world and I have not seen any case where the company asks for a photo ID or a credit card copy. Yesterday I placed an order with linode.com and my order is on hold because of their "fraud check system". Is it common to ask for this information in the U.S.? I have never been asked for it here (Istanbul, Turkey). I already asked what their motive and legal standing is, and the reason my order is held by their fraud system; I also asked whether it is because I live in Istanbul, Turkey. Their answer was the following: "We would not be able to disclose specific information related to our fraud system." And I'm asked repeatedly whether I want to cancel my order or not. I am not questioning the reputation of linode.com; if I thought it was questionable, I would not have placed an order. I think asking for a photo ID is neither legal nor provides any security.

    Read the article

  • Mounting external hard drive EXT4: "the unlocked device does not have a recognizable filesystem on it"?

    - by user824924
    I'm having problems mounting ext4 partitions (inside a LUKS partition) on external drives. The drives are fine; there is no problem whatsoever with the drives and no filesystem corruption. This has been happening since a recent automatic system upgrade and a manual upgrade to kernel 3.12.0. It goes like this: I plug in the external drive, the passphrase is asked for the LUKS device, and the LUKS partition is correctly unlocked/opened; but instead of proceeding with mounting the now exposed ext4 partition, there's a pop-up saying "the unlocked device does not have a recognizable filesystem on it". The same happens in this case:
    $ gvfs-mount -d /dev/sdc2
    Enter a passphrase to unlock the volume
    The passphrase is needed to access encrypted data on WDC WD250... (250 GB Hard Disk).
    Password:
    Error mounting /dev/sdc2: The unlocked device does not have a recognizable file system on it
    Doing a manual sudo mount /dev/dm-1 /mnt/testfolder works with no errors and there is no problem with the filesystem (fsck'ed). Also there doesn’t seem to be anything useful written to dmesg when this happens. What gives?
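
    For comparison, the manual path that does work can be scripted end to end; a sketch, where the mapper name and mount point are arbitrary:

        # Unlock the LUKS container (prompts for the passphrase) and mount the ext4 filesystem inside it
        sudo cryptsetup luksOpen /dev/sdc2 extdrive
        sudo mount /dev/mapper/extdrive /mnt/testfolder

        # When finished, reverse the two steps
        sudo umount /mnt/testfolder
        sudo cryptsetup luksClose extdrive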

    Read the article

  • How to host a simple website using a domain name I own

    - by Cedric Martin
    I'm familiar with hosting webapps when I'm doing "the whole shebang" of installing / configuring / setting up Apache/Tomcat/PostgreSQL and "coding" the website myself using HTML / JSP / CSS etc. on dedicated servers I'm renting. But in the above case I'm "owning" the entire stack: from the Debian GNU/Linux dedicated servers to every single file that is served. Now I'd like to do something much simpler, and I must admit I don't know what's involved at all. I'd like to host a simple website made of only a few static pages (no database, no nothing) and I'd like it to be accessible from "example.com". What needs to be done technically to have such a thing? How is the DNS supposed to be set up? Note that I do not want to host this on one of my dedicated servers.
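
    At its simplest this is one DNS record plus one web server virtual host; the sketch below assumes Apache 2.4 on a Debian/Ubuntu box, with the example.com domain from the question and placeholder paths and IP:

        # 1) DNS: at the registrar / DNS host, point the domain at the server's public IP
        #       example.com      A      203.0.113.20
        #       www.example.com  CNAME  example.com
        #
        # 2) Apache: /etc/apache2/sites-available/example.com.conf
        #       <VirtualHost *:80>
        #           ServerName example.com
        #           ServerAlias www.example.com
        #           DocumentRoot /var/www/example.com
        #       </VirtualHost>
        #
        # 3) Enable it and drop the static pages into the DocumentRoot
        sudo mkdir -p /var/www/example.com
        sudo a2ensite example.com.conf
        sudo service apache2 reload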

    Read the article

  • Can't mount Windows partition?

    - by C.J.
    When I try to open the Windows partition from Ubuntu I receive the error:
    Unable to mount 55 GB Filesystem
    Error mounting: mount exited without exit code 13: ntfs_mst_post_read_fixup_warn: magic: 0x04010400 size: 1024 usa_ofs: 1026 usa_count: 1026: Invalid argument
    Record 6 has no FILE magic (0x4010400)
    Failed to open inode FILE_Bitmap: Input/output error
    Failed to mount '/dev/sda2': Input/output error
    NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more detail.
    Additionally, I can't open the Windows partition. I've tried updating it many times but it won't show up in GRUB. Does anybody know what all this means, and how I might fix it? I thank you for any help in advance.
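
    The message itself already points at the usual two-step fix; a sketch (the device name is taken from the error, the rest is standard ntfs-3g and dmraid tooling):

        # From Windows, the fix the NTFS driver recommends:
        #   chkdsk C: /f       then reboot into Windows twice before mounting from Ubuntu again

        # From Ubuntu, a more limited repair attempt with the ntfs-3g tools:
        sudo ntfsfix /dev/sda2

        # If the disk is actually FakeRAID, activate it and mount the /dev/mapper device instead:
        sudo dmraid -ay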

    Read the article

  • Texturize a shape of multiple triangles in 2D

    - by Deukalion
    This is an example of a shape consisting of multiple points and triangles that eventually form a shape: red dots = Vector3 (X, Y, Z) or Vector2 (X, Y). If I have a texture of a certain size, how do I texturize this area in the best way, so that the texture inside the shape matches the shape and does not overlap anywhere? Perhaps also with a chance to scale the texture in case it's too small or too big for the shape, but still so that it gets rendered correctly. Do I treat the shape as a rectangle? Figure out its 4 corners? Or do I calculate the distance between Center - (Texture Width / 2) and each point, to see how "many" times the texture can fit on that axis and estimate what texture coordinates each point should get? I've looked at texture mapping but haven't found any concrete examples that explain it well; it's also confusing with the 0.0-1.0 values for texture coordinates.

    Read the article

  • ASP.NET AJAX Release History : Q1 2010 SP2 (version 2010.1.519)

    Common for all controls - What's Fixed:
    Fixed: Problem with RegisterWithScriptManager=false (MVC) and browsers without gzip support
    Fixed: $telerik.getLocation implementation, in case the element is a child of an element with position:fixed
    Visual Studio Extensions - What's New:
    Improved: Visual Studio 2010 support for upgrade notifications
    Visual Studio Extensions - What's Fixed:
    Fixed: Code language selection in WebSite does not work
    RadAjax - What's Fixed:
    Fixed: RadAjax controls break on AJAX callback with ToolkitScriptManager...

    Read the article

  • Unity Desktop Displays strange lines

    - by Alex Holsgrove
    I didn't quite know what title to give this problem, but hopefully the screenshot will explain more. I am running a Samsung R60+ laptop on Ubuntu 13.10 with a Radeon X1250 GPU. After I log in and the Unity desktop shows, I can see these strange lines at the top of the screen. I presumed it was perhaps a driver issue and found this article to see if I could resolve it: https://help.ubuntu.com/community/RadeonDriver I cannot get on with Unity at all (where have all of the menus gone?), so perhaps reverting to GNOME may be a solution in my case? I'd welcome any ideas, please.

    Read the article

  • How important are unit tests in software development?

    - by Lo Wai Lun
    We do our software testing by testing a lot of I/O cases, so that developers and system analysts can open reviews and test their committed code within a given time period (e.g. 1 week). But when it comes to extracting information from a database, how do we decide which cases to consider and what methodology to start with? That tends to be a case-by-case study, because the unit testing depends on the project involved, which is too specific and particular most of the time. What is the general overview of the steps and precautions for unit testing?

    Read the article

  • Studies on code documentation productivity gains/losses

    - by J T
    Hi everyone, after much searching, I have failed to answer a basic question pertaining to an assumed known in the software development world: WHAT IS KNOWN: Enforcing a strict policy on adequate code documentation (be it Doxygen tags, Javadoc, or simply an abundance of comments) adds overhead to the time required to develop code. BUT: Having thorough documentation (or even an API reference) brings with it (one assumes) productivity gains for new and seasoned developers when they are adding features or fixing bugs down the road. THE QUESTION: Is the added development time required to guarantee such documentation offset by the gains in productivity down the road (in a strictly economic sense)? I am looking for case studies, or answers that can bring with them objective evidence supporting the conclusions that are drawn. Thanks in advance!

    Read the article

  • How can I optimise ext4 for reliability?

    - by amin
    ext4 was introduced as more reliable than ext3; with block journaling, is there any chance of considering it 100% reliable? What about enabling block journaling, which is disabled by default? To explain my case in more detail, as a friend suggested: I have an embedded Linux device; after installation the keyboard and monitor are detached and it works standalone. My duty is to make sure it has a reliable file system, because if errors occur there is no way to manually correct faults on the device. I can't force my customer to use a UPS with each device to ensure there are no faults caused by power failure. What more can ext4 offer me besides block journaling? Thanks in advance.
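
    The "block journaling" mentioned here is presumably ext4's full data journaling (data=journal), which is indeed off by default; a sketch of how it is usually enabled, with the device name as a placeholder:

        # Record data journaling as a default mount option in the superblock
        sudo tune2fs -o journal_data /dev/sdXN

        # Or request it explicitly for a single mount
        sudo mount -o data=journal /dev/sdXN /mnt

        # Other knobs that matter for power-loss tolerance: keep write barriers
        # enabled (the default) and consider a short commit interval, e.g. commit=5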

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying. Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guaranties of any sort. This info is provided “as is”. By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1. To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages. There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.: <CustomParameter Name="$ShaderType$" Value="Vertex"/> On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this: <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/> such that we now have:     <CustomParameters>       <CustomParameter Name="$ShaderType$" Value="Vertex"/>       <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>     </CustomParameters> You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you will wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sort of weird errors as a result). Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums (though not all 4.0 cards support Compute shaders; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware). 
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Bare-metal mode for Ubuntu

    - by user1071136
    I'm interested in benchmarking a console-mode application, and would like to reduce to a minimum any interference from other processes in the system. Is there an easy way to boot Ubuntu 12.04 in a "bare-metal" mode? I'm still interested in casually booting a "desktop" version of Ubuntu (so I'd prefer to avoid permanent changes), and would like to avoid installing a separate Ubuntu Server version. My use case is the following: the application is single-threaded and console-mode only; the test box has 12GB of memory; I ssh into the test box. It seems I can skip at least Unity, the X server and their dependents.
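
    A non-permanent way to get close to that is sketched below; it assumes the stock 12.04 setup (Upstart and lightdm) and uses a placeholder binary name:

        # Stop the display manager (and with it X/Unity) for this session only;
        # it comes back on the next boot, so nothing is permanent
        sudo stop lightdm

        # Run the benchmark from the ssh session, pinned to one core and given
        # a higher priority to reduce scheduler noise
        sudo nice -n -10 taskset -c 2 ./my_console_app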

    Read the article

  • Result class dependency

    - by Stefano Borini
    I have an object containing the results of a computation. This computation is performed in a function which accepts an input object and returns the result object. The result object has a print method. This print method must print out the results, but in order to perform this operation I need the original input object. I cannot pass the input object at printing time because it would violate the signature of the print function. One solution I am using right now is to have the result object hold a pointer to the original input object, but I don't like this dependency between the two, because the input object is mutable. How would you design for such a case?

    Read the article

  • ReportViewer configuration

    - by Suresh Behera
    Some of my team members were having trouble configuring ReportViewer. Most of the posts out there are confusing or hard to understand properly. This is what we concluded, and I thought I would put a quick note on it. Section 1: this is the report name. Section 2: "ReportServer" is the default name in the URL; sometimes, even if you don't see it in the browser, you still need to try with "ReportServer" in the URL. In our case the DB/report team did not mention anything about "ReportServer" in the URL, but it was needed. Section 3: "Adventure Works...(read more)

    Read the article

  • Connecting People, Processes, and Content: An Online Event

    - by Brian Dirking
    This morning we announced a new online event, “Transform Your Business by Connecting People, Processes, and Content.” At this event you will learn how an integrated approach to business process management (BPM), portals, content management, and collaboration can help you make more accurate and timely decisions based on the collective knowledge across your organization. But more than that, this event will focus on how customers have been successful in transforming to a social enterprise. We’ve blogged about a few of them in the past few weeks: Balfour Beatty, New Look, Texas A&M. This event will give you an opportunity to learn about other customers and their successes, as well as an opportunity to watch Oracle executives participate in a roundtable discussion on the state of the social enterprise, hear industry experts discuss best practices and case studies of leveraging BPM, portals, and content management to transform and improve business processes, and engage the experts by having your questions answered in real time. Register today and learn how Oracle Fusion Middleware provides the most complete, open, integrated, and best-of-breed solution in the industry for transforming your business.

    Read the article

  • Happy New Year from Oracle Technology Network!

    - by Cassandra Clark
    Happy New Year from the Oracle Technology Network team! All year long we have been working hard to bring you new member only offers and discounts. This month our partners have extended their offers an extra month in case you missed taking advantage of them due to the holidays. Visit the OTN Member Benefit Page today! Get discounts on Oracle Press, Packt Publishing, Manning, Apress, O'Reilly and CRC Press books. We also have discounts on Oracle products (Weblogic Server this month), fun wallpapers to download, discounts on industry events (QCon London) and on the Dr. Dobb's DVD release 6. If you'd like to see any offers/discounts added please respond in the comment section or take the OTN Membership Survey before it closes at the end of this month.

    Read the article

  • Difference between ~/folder and /home/username/folder when creating a path in /etc/environment

    - by r0xx4nne
    I had an executable script located in the ~/project/ directory on my Ubuntu machine, and I tried to add that path to /etc/environment. So I edited the path to PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:~/project/". Then I logged out and back in, opened the terminal as su and ran the command to execute my script in that folder, but the result was "command not found". Then I changed the path in /etc/environment to PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/r0xx4nne/project/" and voila, it works. Now I can run the executable script inside ~/project/ without fail under the su command. My question is, what's the difference between ~/project and /home/r0xx4nne/project when it comes to creating a path in /etc/environment? Why does it behave like this? I am a newbie and I just want to know more. Thanks for any reply.
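
    The short answer is that /etc/environment is read by pam_env, not by a shell, so "~" is never expanded and ends up in PATH as a literal character sequence; a quick way to see this, sketched below:

        # After logging in with the "~/project/" variant in /etc/environment:
        echo "$PATH" | tr ':' '\n' | tail -n 2    # the last entry is the literal string ~/project/

        ls "~/project/"    # fails: there is no directory literally named "~"
        ls ~/project/      # works: here the shell expands ~ before ls runs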

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title that probably doesn't make much sense. Hopefully I can explain myself better here as it's something that's kinda bugged me for ages, and is now becoming a pressing concern as I write a bit of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application can not (easily) modify this configuration file for fear of clobbering the user's own changes to the default configuration. So my question is, if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible given the following constraints: I actually don't have a canonical default config in the application per se, it's more of a 'cascading filesystem'-like affair - the config template is stored in default/config.json and when the user wishes to edit the configuration, it's copied to user/config.json. If a user config is found it is used - there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config the default config is used. When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory). There is no UI for the configuration options as it's not appropriate to the userbase (think of my software as a library or something, the users are developers, the config is done in the user/config.json file). Due to my software being library-like there's no simple way to, on updating of the software, run some tasks automatically (so any ideas of look at the current config, compare to template config, add ing missing keys) aren't appropriate. The only solution I can think of right now is to say "there's a new config setting X" in release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup. Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited ~/.atom/config.cson is generated and the setting goes in there. From now on is Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the UI for the new setting into the UI to aid 'discoverability' of the new setting. I don't have that luxury. Clarification of my constraints and what I'm actually looking for: The software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed - I just do a config('some.key') kinda call and it knows to look to see if the user has a config clone and if so use it, otherwise use the default config which is part of my package. 
Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live with the constraints of the system I'm using if possible. And it's not just about discoverability either, one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release notes idea from above isn't doable as, if the user does not follow the advice, suddenly I have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files and allowing a user to make user-specific version of these in order to override the defaults? A per-key cascade (rather than per-file cascade) where the user only specifies their overrides? In this case, what happens if a configuration value is an array - do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append to)? It seems like configuration is kinda hard, so how is it solved in the wild?
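
    If the upstream system ever moves to a per-key cascade, the merge itself is the easy part; a rough sketch assuming the config files are JSON and jq is available (the paths mirror the question's layout, the output path is a placeholder):

        # Deep-merge the user's overrides on top of the template; the right-hand side wins per key
        jq -s '.[0] * .[1]' default/config.json user/config.json > merged/config.json

        # Caveat the question already anticipates: for arrays, jq's * simply replaces the left
        # value with the right one, so "append vs replace" still needs an explicit convention.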

    Read the article

  • Web.NET is Closing Fast

    - by Chris Massey
    The voting for sessions has now closed, and sadly only half of the potential sessions could make it through. On the plus side, the sessions that floated to the top look great and, with the votes in, Simone and Ugo have moved right along and created a draft agenda to whet our appetites. Take a look, and let them know what you think. I’d also strongly recommend that you get ready to grab your tickets when they become available next week (specifically, September 18th), as places are going to be snapped up fast. In case you need a reminder as to why Web.NET is worth your time: a complete focus on web development, awesome sessions, an all-night hackathon, and it’s free (although I urge you to make a donation to help Simone and Ugo create the best possible event). Put October 20th in your calendar, and start packing. I’ve already booked my flights, and am perusing the list of hotels while I eat my lunch. Bonus material: there will be a full day of RavenDB training on Monday the 22nd of October, run by Ayende himself, and attending Web.NET will get you a 30% discount on the cost of the session.

    Read the article

  • Is there a taskbar applet to show the status of a remote host?

    - by Mathew
    At the end of the day I would like to be able to copy files to my home PC just in case I feel inspired to work on them in the evening. But I only want to do this if the PC is on already. (I can remote wake-on-lan the PC but I don't want to always be doing that). I would like some taskbar applet that shows the status of the PC and whether I can ssh into it or not. Obviously it would also be interesting to have an idea as to how long it is on for whilst I am at work as that gives a good indication of whether anyone is in or not. However being able to unobtrusively copy files to the remote machine is the main objective. Perhaps another approach is to run rsync on cron and if the remote host is not up then I guess it will fail. Is that correct? If anyone else has ideas on how to best sync a work and home PC then please do tell.
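
    On the cron idea at the end: yes, rsync will simply fail if the host is down, but it is tidier to probe first; a small sketch with a placeholder hostname and paths:

        #!/bin/sh
        # Run from cron; only sync when the home PC is up and answering on SSH
        if nc -z -w 5 home-pc.example.org 22; then
            rsync -az --partial ~/work/ home-pc.example.org:backup/work/
        fi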

    Read the article
