Search Results

Search found 35244 results on 1410 pages for 'version numbers'.


  • Moving AI in a multiplayer game

    - by Smallbro
    I've been programming a multiplayer game and it's coming together very nicely. It uses both TCP and UDP (UDP for movement and TCP for just about everything else). What I was wondering is how I would go about sending multiple moving AIs without much lag. At first I used TCP for everything and it was very slow when people moved. I'm currently using a butchered version of http://corvstudios.com/tutorials/udpMultiplayer.php for my movement system, and I'm wondering what the best method of sending AI movements is. By movements I mean the AI chooses left/right/up/down and the player can see this happening. Thanks.
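    One common approach (a sketch, not taken from the linked tutorial; the packet layout and all names are hypothetical): treat AI entities like player entities and broadcast their moves over the same UDP channel at a fixed tick rate, with a sequence number so clients can drop stale datagrams. In C++:

        #include <cstdint>
        #include <cstring>

        enum class Direction : uint8_t { Left, Right, Up, Down };

        #pragma pack(push, 1)
        struct AiMovePacket {          // 11 bytes packed; dozens fit in one datagram if batched
            uint16_t  entityId;        // which AI this update describes
            uint32_t  sequence;        // clients drop anything older than the last sequence seen
            Direction dir;             // the move the AI chose this tick (left/right/up/down)
            int16_t   x, y;            // authoritative position so clients can correct drift
        };
        #pragma pack(pop)

        // Serialize into a buffer ready for sendto(); the struct is packed, so a raw
        // copy is fine between same-endianness machines (add byte swapping otherwise).
        size_t writePacket(const AiMovePacket& p, uint8_t* buf) {
            std::memcpy(buf, &p, sizeof p);
            return sizeof p;
        }

    Clients then interpolate between the last two received positions instead of teleporting the AI, which hides normal UDP jitter; events that must not be lost (an AI spawning or dying, say) stay on TCP.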

    Read the article

  • How to retrieve data from a corrupted volume

    - by explorex
    Hi, my Ubuntu 10.10 just crashed (probably due to a hardware error; in the end I was getting errors like Unknown filesystem ..... grub> .. a GRUB console, before I could take any action) and I reinstalled the same version from a USB stick. I had Ubuntu installed on an ext4 filesystem, and I also have the same filesystem on another partition of the same hard disk. When I try to access my previous filesystem, I get the error: Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda6, missing codepage or helper program, or other error. In some cases useful info is found in syslog - try dmesg | tail or so. I had some important files in the previous volume and I don't know how to retrieve them. And what are the chances that I would get the same outcome (hardware error)? Please help me!
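    A typical recovery sequence for this error, assuming the damaged partition really is /dev/sda6 as in the message above (and ideally after imaging the drive with ddrescue, since a failing disk can get worse with every read):

        dmesg | tail                          # see what the kernel actually complained about
        sudo mke2fs -n /dev/sda6              # -n = dry run: only prints the backup superblock locations, writes nothing
        sudo fsck.ext4 -b 32768 -y /dev/sda6  # repair using one of the listed backup superblocks
        sudo mount /dev/sda6 /mnt             # if fsck succeeds, the files should be reachable again

    If fsck cannot repair the filesystem, testdisk and photorec (package testdisk) can often still carve files off the partition. As for recurrence: if the underlying cause was a dying disk, check it with smartctl -a (package smartmontools) before trusting it again.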

    Read the article

  • New information on Windows 8 and its companion Windows Server drip-fed by a Microsoft executive

    Update of 10.06.2010 by Katleen. New information on Windows 8 and its companion Windows Server is being drip-fed by a Microsoft executive. If you look at Microsoft's product life cycles, you can see that for client and server OS releases, a major release alternates with a minor one, every two years. The most recent update to the server version of Windows 7 is called Windows Server 2008 R2, and it was a minor one (released in 2009). We can therefore logically expect major changes in the next release. In a recent interview, Bob Muglia, President of the Tools and Servers unit at Micr...

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012; a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected, but when I tried to open the vsp file in Visual Studio 2010, the VS2010 IDE crashed. As a responsible citizen I raised bug #762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – in case you didn't already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug.

    1. Behaviour and Steps to Reproduce the Issue

    Description: I have generated a vsp file by using the Visual Studio 2012 standalone profiler. When I try to open the vsp file in Visual Studio 2010, the IDE crashes. I understand that a vsp generated by VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see.

    Steps to reproduce the issue:
    1. Pick up the standalone profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is 64-bit, I installed the x64 version of the standalone profiler.
    2. Configure the system path by setting the 'environment variable' path to where the profiler is installed. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools
    3. Create a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234
    4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp'
    5. This generates the following message on the cmd:

        Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0
        Copyright (C) Microsoft Corporation. All rights reserved.
        Configuring and attaching to ASP.NET process. Please wait.
        Setting up profiling environment.
        Starting monitor.
        Launching ASP.NET service.
        Attaching Monitor to process.
        Launching Internet Explorer.
        The profiler is attached to ASP.net. Please run your application scenario now.
        Press Enter to stop data collection...

    6. I perform certain actions and then come back to the cmd and hit Enter to shut down the profiling. Once I do this, the following message is written to the cmd:

        Press Enter to stop data collection...
        Profiling now shut down.
        Report file "C:\Temp\SampleEISK.vsp" was generated.
        Running VsPerfReport, packing symbols into the .VSP.
        Shutting down profiling and restarting ASP.NET. Please wait.
        Restarting w3wp.exe.

    7. I look in the C:\Temp folder and can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012.
    8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom!

    What I would expect to happen: a message "VS 2010 does not support the vsp file generated by VS 2012". What actually happened: the VS 2010 IDE crashed.

    2. Resolution

    This is a valid bug! However, there isn't much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team: "Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. Given that a fix would not unblock the scenario of opening a 2012 created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hot fixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team"

    Though it would be great to improve the behaviour, this is not a defect that will stop you from progressing in any way. It is important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.

    Read the article

  • How to add a holding page in front of a domain

    - by Jason Bradberry
    I have set up a holding page to announce a new version of a website coming soon. I wanted people to still be able to access the original site, so my approach was to place the holding page in the root folder on the server, and move the original site to a subfolder and link to it from the holding page. However, on testing this setup it appears to have hurt the SEO placing of the website. Is there a better approach to this? I'm a bit stumped as I want both to share the same URL.
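    One less invasive arrangement, sketched below for Apache with mod_rewrite in the site root's .htaccess (filename hypothetical): leave the original site where it is and 302-redirect only the homepage to the holding page. A 302 tells search engines the move is temporary, so the existing URLs and their rankings are left alone:

        RewriteEngine On
        # temporarily (302) send only the homepage to the holding page
        RewriteRule ^$ /holding.html [R=302,L]

    Alternatively, put the preview of the new version on a subdomain (e.g. beta.example.com) and keep the production site at the root; then nothing about the indexed URLs changes at all.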

    Read the article

  • Will using two different tracking codes affect my SERP

    - by Danny Hefer
    Hello everyone, and thanks for your time! I am now facing a problem after a site migration. The new site is basically an improved version of the old site, with the same content and some extras. After pointing the domain name to the new site, the old site was still online for a while but didn't get any traffic. The new site has its own tracking code. So, the old tracking code has age (something like 7 years) but no visitors for a month, while the new tracking code is a month old with acceptable traffic. How do you think Google will react if I add the old tracking code to the new site? Thanks in advance!

    Read the article

  • Give a session on C++ AMP – here is how

    - by Daniel Moth
    Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks that want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on. Please get the slides and the recording from channel9 (I'll refer to slide numbers below). This is how I've been presenting the C++ AMP session:

    Context
    (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample (for VS 11 Developer Preview).
    (slide 4) Use an NVIDIA slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
    (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
    (slide 6) Use the APU example from AMD, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
    (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

    Code
    (slides 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
    (slides 12-13) index<N>, extent<N>, and grid<N>.
    (slides 14-16) array<T,N>, array_view<T,N> and comparison between them.
    (slide 17) parallel_for_each.
    (slides 18, 21) restrict.
    (slides 19-20) actual restrictions of restrict(direct3d) – the slides speak for themselves.
    (slide 22) Bring it all together with a matrix multiplication example.
    (slides 23-24) accelerator, and accelerator_view.
    (slides 26-29) Introduce tiling, incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

    IDE
    (slides 34, 37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP we hope will come at the Beta timeframe, which is when I'll be spending more time talking about it.
    (slides 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

    Summary
    (slide 39) Re-iterate some of the points of slide 7, and add the point that the C++ AMP spec will be open for other compiler vendors to implement, even on other platforms (in fact, Microsoft is actively working on that).
    (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

    "But I don't have time for a full-blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X." If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here. Comments about this post by Daniel Moth welcome at the original blog.
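    For rehearsing the Code section, a minimal C++ AMP array-addition sketch along the lines of slides 9-11 may help. It uses the API names that ultimately shipped: the Developer Preview's grid<N> and restrict(direct3d) mentioned above became extent<N> and restrict(amp) by release:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        // Element-wise sum on the default accelerator (the GPU, if one is present).
        void add_arrays(const std::vector<int>& a, const std::vector<int>& b,
                        std::vector<int>& sum) {
            array_view<const int, 1> av((int)a.size(), a);
            array_view<const int, 1> bv((int)b.size(), b);
            array_view<int, 1>       sv((int)sum.size(), sum);
            sv.discard_data();  // sum's old contents need not be copied to the GPU

            parallel_for_each(sv.extent, [=](index<1> idx) restrict(amp) {
                sv[idx] = av[idx] + bv[idx];
            });
            sv.synchronize();   // copy the result back into the CPU-side vector
        }

    The array_view pieces do the host/device copying implicitly, which is exactly the point slides 14-16 make when comparing array<T,N> against array_view<T,N>.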

    Read the article

  • Cannot install nautilus elementary.

    - by coklatua
    When I try apt-cache policy nautilus it shows this:

        Installed: 1:2.32.0-0ubuntu1-ppa1
        Candidate: 1:2.32.0-0ubuntu1-ppa1
        Version table:
       *** 1:2.32.0-0ubuntu1-ppa1 0
              100 /var/lib/dpkg/status
           1:2.32.0-0ubuntu6~ppa160 0
              500 http://ppa.launchpad.net/am-monkeyd/nautilus-elementary-ppa/ubuntu/ maverick/main amd64 Packages
           1:2.32.0-0ubuntu1.1 0
              500 http://archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
           1:2.32.0-0ubuntu1 0
              500 http://archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages

    As you can see I already added the am-monkeyd PPA, but when I update & upgrade nothing changes.
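    One thing worth trying, based on the apt-cache output above: request the PPA build explicitly, since apt currently considers the locally installed -ppa1 build the candidate:

        sudo apt-get update
        sudo apt-get install nautilus=1:2.32.0-0ubuntu6~ppa160

    Pinning the version on the command line sidesteps whatever is keeping the candidate at the installed build; if apt still refuses, /etc/apt/preferences is the next place to look for a pin.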

    Read the article

  • How to secure the robots.txt file?

    - by CompilingCyborg
    I would like user agents to index my pages only, without accessing any directory on my server. As an initial thought, I had this version in mind:

        User-agent: *
        Disallow: */*
        Sitemap: http://www.mydomain.com/sitemap.xml

    My questions: Is it correct to block all directories like that – Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial for robots.txt:

        #Add this if you want to stop Alexa from indexing your site.
        User-agent: ia_archiver
        Disallow: /

        #Add this to stop duggmirror
        User-agent: duggmirror
        Disallow: /

        #Add this to allow specific agents
        User-agent: Googlebot
        Disallow:

        #Add this to allow all agents while blocking specific directories
        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /*?*
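    On the first question: Disallow values are plain path prefixes in the original robots.txt protocol, so Disallow: */* is not reliably understood by all crawlers. A minimal sketch of the prefix-based way to say "index pages, not these directories" (directory names here are placeholders):

        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /private/
        Sitemap: http://www.mydomain.com/sitemap.xml

    On the second question: the Sitemap line is fetched independently of the Disallow rules, but URLs the sitemap lists still won't be crawled if a Disallow rule covers them. And on "securing": robots.txt is advisory only, so anything genuinely sensitive needs real access control (authentication or server configuration), not a robots.txt entry.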

    Read the article

  • LGPL License in commercial application

    - by Jacob
    I have searched around but I don't seem to be able to get a clear answer to my questions that I can understand. I want to use the Xuggler library in my application, which is licensed under either the GPL or the LGPL depending on whether I compile it myself. I don't intend to edit the library. If I compile it myself and thus get an LGPL version of the library, can I use it in a commercial application without having to distribute the source code of my application? Furthermore, do I have to give my application the LGPL license as well? What other problems might using this library give me?

    Read the article

  • Repacked proprietary software keeps updating the same deb

    - by Johannes
    I repacked a proprietary program delivered as a tar file into a deb file in order to have a company-wide repository. I used reprepro to set up the repository and signed it. A Unix timestamp fakes the version numbering, so I can have different (real) versions installed at the same time. Almost everything works as expected. The deb file looks like this: mysoft8.0v6_1366455181_amd64.deb

    The only problem: on a client machine it tries to install the same deb file over and over again because it thinks it's an update. What am I missing? The control file in the deb package looks like this:

        Package: mysoft8.0v6
        Version: 1366455181
        Section: base
        Priority: optional
        Architecture: amd64
        Installed-Size: 1272572
        Depends:
        Maintainer: me
        Description: mysoft 8.0v6 dpkg repackaging

    and the config in the repository (/mirror/mycompany.inc/conf/distributions):

        Origin: apt.mycompany.inc
        Label: apt repository
        Codename: precise
        Architectures: amd64 i386
        Components: main
        Description: Mycompany debian/ubuntu package repo
        SignWith: yes
        Pull: precise

    Help much appreciated. Added guide: this is the guide I used to create the repository.
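    While chasing the root cause, a workaround sketch: put the package on hold on the client so apt stops treating the repository copy as an upgrade (package name as in the control file above):

        echo "mysoft8.0v6 hold" | sudo dpkg --set-selections
        # or, with newer apt versions:
        sudo apt-mark hold mysoft8.0v6

    It is also worth comparing the version dpkg thinks is installed (dpkg -s mysoft8.0v6) with the version in the repository's Packages index: if each reprepro import of a rebuilt deb carries a newer timestamp-version, clients will correctly see it as an upgrade every time.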

    Read the article

  • What's New in OIC Analytics 11g?

    - by LuciaC
    Oracle Incentive Compensation (OIC) Analytics for Oracle Data Integrator (ODI) breaks down traditional front- and back-office silos, bringing together sales performance data with those responsible for the sale and with selling costs. It is a framework for Sales Performance Management based on a data mart of key performance metrics, regardless of whether or not these metrics are incentivized. Commissionable metrics are brought into OIC for commission calculation and brought back to enrich the performance data mart. Executives and Product Marketing/Product Line Managers are provided with actionable sales performance analytics. Incentivized salesreps and partners are provided with commission dashboards on a frequent basis to inform them how they are doing and how far they are from their goals. OIC Analytics is now certified with 11g and has additional features. Oracle continues to invest in OIC Analytics, but the baseline for the investments will be the 11gR1 certification version of OIC Analytics. Read about what's new and the certification details in Doc ID 1590729.1.

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in this deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?
    If we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore, & documenting them is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR. By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows. New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win
    Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, SQL Server 2012 nonclustered columnstore query performance increased significantly. Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements. Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details
    The table provides the raw data & summarizes the performance deltas:

                                        Logical Reads (8K pages)    CPU (ms)    Durn (ms)
        Columnstore                                      160,323      20,360        9,786
        Conventional Table & Indexes                   9,053,423     549,608      193,903
        Δ                                                    x56         x27          x20

    The charts provide additional perspective on this data. "Conventional vs. Columnstore Metrics" documents the raw data. Note on this linear display the magnitude of the conventional index performance vs. columnstore. The "Metrics (Δ)" chart expresses these values as a ratio.

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more. Subsequent case studies in this series document performance enhancements that are even more significant.
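    For context, creating the index behind numbers like these is a one-liner in SQL Server 2012; the sketch below uses hypothetical table and column names, not the actual SONAR schema:

        -- nonclustered columnstore over the columns the report queries aggregate
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_PerfSample
        ON dbo.PerfSample (CounterId, SampleTime, SampleValue);

    In SQL Server 2012 a nonclustered columnstore makes its table read-only, which is why the nightly partition-switch load pattern described above is such a natural fit; SQL Server 2014's updatable clustered columnstore removes that blocker.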

    Read the article

  • Dell wireless not working after upgrade

    - by Omer Saeed
    So, the short version of my sad story is that I tried upgrading Ubuntu to 12.04 and the wireless driver has stopped working. I have tried all the solutions but nothing seems to be working. When I try to install my wireless driver from "Additional Drivers" it says:

        Sorry, installation of this driver failed. Please have a look at the log file for details: /var/log/jockey.log

    The lspci command gives me the following info about the wireless card:

        0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)

    I have tried removing the bcm drivers and reinstalling, but nothing seems to be working. rfkill is good too.
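    For the BCM4312 LP-PHY on 12.04, a commonly suggested sequence (sketched here on the assumption of a temporary wired connection; package availability varies by release) is to drop the proprietary wl driver and use the open b43 driver's firmware instead:

        sudo apt-get remove --purge bcmwl-kernel-source
        sudo apt-get install b43-fwcutter firmware-b43-lpphy-installer
        sudo modprobe b43

    If the Additional Drivers tool still fails afterwards, /var/log/jockey.log (named in the error above) usually says which step broke.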

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the iOS app, so there is no migrating to the new document. You basically have to subclass UIManagedDocument with a new model class, and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.

    Step One: Add a new class file in Xcode and subclass "UIManagedDocument"

    Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

        @interface TimeTagModel : UIManagedDocument

        - (NSManagedObjectModel *)managedObjectModel;

        @end

    Step Two: Writing the methods in the implementation file (.m)

    I first added a shortcut method for applicationDocumentsDirectory, which returns the URL of the app directory:

        - (NSURL *)applicationDocumentsDirectory
        {
            return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
        }

    The next step was to pull the managedObjectModel file itself (.momd file). In my project, it's called "minimalTime":

        - (NSManagedObjectModel *)managedObjectModel
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];

            return managedObjectModel;
        }

    After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

        - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error
        {
            // If a legacy store exists, copy it to the new location
            NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

            NSFileManager *fileManager = [NSFileManager defaultManager];
            if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
            {
                NSLog(@"Old db exists");
                NSError *thisError = nil;
                [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];
            }

            return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
        }

    Basically what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device. If the file exists, it tells you in the console, and then tells the fileManager to replace the storeURL (from the method parameter) with the legacy URL. This basically gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).

    Final step: Actually load this database

    Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this:

        - (void)loadDatabase
        {
            static dispatch_once_t onceToken;

            // Only do this once!
            dispatch_once(&onceToken, ^{
                // Get the URL
                // The minimalTimeDB name is just something I call it
                NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
                // Init the TimeTagModel (our custom class we wrote above) with the URL
                self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

                // Set up the undo manager if it's nil
                if (self.timeTagDB.undoManager == nil) {
                    NSUndoManager *undoManager = [[NSUndoManager alloc] init];
                    [self.timeTagDB setUndoManager:undoManager];
                }

                // You have to actually check whether it exists already (for some reason you
                // can't just call "open it, and if it's not there, create it")
                if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
                    // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!
                    [self.timeTagDB openWithCompletionHandler:^(BOOL success) {
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error opening up the database");
                        }
                        else {
                            NSLog(@"Opened the file--it already existed");
                            [self refreshData];
                        }
                    }];
                }
                else {
                    // If it doesn't exist, you need to attempt to create it
                    [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success) {
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error creating the database");
                        }
                        else {
                            NSLog(@"Created the file--it did not exist");
                            [self refreshData];
                        }
                    }];
                }
            });
        }

    If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

        - (void)refreshData
        {
            NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];
            [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
        }

    kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it, and start passing it around to one another. The reason we do this as a notification is that this is being run in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

    Read the article

  • Forcing Nautilus to use Kerberos (Active Directory) authentication

    - by user14146
    Is there a way to get Nautilus or any other file manager that runs on Ubuntu 11.04 to use Kerberos for authentication? I'm using Likewise Open to join machines to the domain, and I can't type in passwords for every user on every computer that needs to mount a network share. I've been able to get Kerberos working with the command line smbclient, but oddly Kerberos does not seem to be Nautilus-integrated. I also checked the SSH config file, and it looks like you can enable GSSAPIAuthentication, but it only works for Kerberos v2, not the current version, which I think is v5.

    Read the article

  • nVidia GeForce Go 7600: can it ever run Unity?

    - by Khaled Musleh
    My laptop, a Toshiba Qosmio G30, has an nVidia GeForce Go 7600 card, which is supposed to support 3D. I run Unity 2D now. I run 12.04 and the graphics driver is --VESA: G73 Board - toshg73m-- per Ubuntu. When I run /usr/lib/nux/unity_support_test -p I get this list:

        Not software rendered: no
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    The card is not blacklisted, but a similar one with GT is! Do you think there is a chance the laptop can run Unity 3D? And maybe I could change the resolution of the screen to a higher one too! I tried all the nvidia drivers provided but none works (except 96 in Ubuntu 12.04); I get a black screen or a terminal screen. Best wishes to all.

    Read the article

  • Windows XP self-installing virus [closed]

    - by Oliver
    Do you remember? Some years ago, there was a huge virus attacking Windows XP in its first version. Once you had installed Windows XP, on your first internet access the virus installed itself on your computer, closing your internet connection and making the computer reboot after some seconds. I wonder... how can a virus install itself this way, from nowhere, without any user action? You install Windows XP... the computer just connects itself to the internet (assuming Microsoft doesn't connect to bad sites on its first connection)... and you have a virus. There is something magic I don't understand here. Can someone explain to me how that virus could attack Windows that way, without any user action, on a freshly installed system?

    Read the article

  • What's up with all the updates? [closed]

    - by Bob Babb
    I use Ubuntu exclusively for my job, especially for the fact that everything works and I get the most out of my processor and memory, but you are killing me with updates! I just lost a very good opportunity from a client that installed Ubuntu but got tired of all the updates. I really can't argue the fact. In a matter of a day I had two software updates. Quote from customer: "It's sad that I come in at 6:00 in the morning to install updates from a LTS version, and then before I leave at the end of the day I have 19 new updates to install. At least Microsoft bundles them in controllable groups." Sadly I have to agree, guys you have to do something about this. Please!

    Read the article

  • How to organise projects with dependencies on BitBucket?

    - by Timwi
    Both Mercurial and BitBucket make one fundamental assumption: 1 repo = 1 project. If I have a project that has a dependency (a library) which is shared by many projects, this assumption gets in the way. Now it is no longer possible to have a separate BitBucket page for each project while still being able to commit atomic revisions to multiple projects. If I put all the projects into one repo, they all become one “project” on BitBucket. If I put them in separate repos, it is no longer possible to know which version of the library project was in use at revision X of a dependent project. How is this situation normally solved on BitBucket, or is there explicitly no support for this common scenario?
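    Mercurial's usual answer to this is subrepositories: keep the library in its own repo (with its own BitBucket page) and pin it inside each dependent project with an .hgsub file; something like the sketch below, where the path and URL are hypothetical:

        # .hgsub at the root of the dependent project
        lib/shared = https://bitbucket.org/yourname/shared-lib

    Every commit in the parent then records the exact changeset of lib/shared in .hgsubstate, so revision X of the dependent project always knows which library revision it was built against. Commits remain per-repo, but hg commit -S (--subrepos) recurses into modified subrepos, which is as close to an atomic cross-project commit as Mercurial gets.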

    Read the article

  • Attached Property port of my Window Close Behavior

    - by Reed
    Nishant Sivakumar just posted a nice article on The Code Project.  It is a port of the MVVM-friendly Blend Behavior I wrote about in a previous article to WPF using Attached Properties. While similar to the WindowCloseBehavior code I posted on the Expression Code Gallery, Nishant Sivakumar’s version works in WPF without taking a dependency on the Expression Blend SDK. I highly recommend reading this article: Handling a Window’s Closed and Closing Events in the View-Model.  It is a very nice alternative approach to this common problem in MVVM.

    Read the article

  • OpenGraph tags and HTML5 validity

    - by netmano
    I have an HTML5-based page, and I included the Open Graph tags according to the documentation. I also checked with the Facebook Debugger, and it can parse the metadata. But when I use the W3C Validator, it reports the OG tags as errors:

        Attribute content not allowed on element meta at this point. <meta property="fb:admins" content="...." />
        Attribute content not allowed on element meta at this point. <meta property="og:url" content="http://www....">

    They are all in the <head>. I need my page to be valid HTML5 with the OG tags as well. Could you give me a hint how this can be achieved? UPDATE: the name variant is also invalid: <meta name='fb:admins' content=''>
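    The property attribute comes from RDFa, not plain HTML5, which is why the validator objects (and why content is then "not allowed": meta needs name, http-equiv, charset or itemprop for content to be permitted). One commonly used arrangement, sketched below, is to declare the RDFa prefixes and validate the page as HTML5+RDFa rather than plain HTML5 (the URL values here mirror the truncated ones above):

        <head prefix="og: http://ogp.me/ns# fb: http://ogp.me/ns/fb#">
          <meta property="og:url" content="http://www....">
          <meta property="fb:admins" content="....">
        </head>

    The name= variant fails for a different reason: og:url and fb:admins are not registered metadata names. In practice many sites simply accept these warnings, since crawlers (including Facebook's, as the Debugger shows) read the tags regardless.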

    Read the article

  • Efficient existing rating system for multiplayer?

    - by Nikolay Kuznetsov
    I would like to add a rating system to the online version of a board game. In this game there are many game rooms, each normally having 3-4 people. So I expect that a player's rating adjustment (RA) should depend on:

    1. the ratings of the opponents in the game room
    2. the number of players in the game room and the final place of the player
    3. a player gets a rating increase if he plays more games and more frequently
    4. if a player leaves a game room (disconnects) before the game ends, he should be punished with a large rating decrease

    I have found two related questions on here: "Developing an ELO like point system for a multiplayer gaming site" and "Simplest most effective way to rank and measure player skill in a multi-player environment?" Please let me know what would be the most appropriate existing rating model to refer to.
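    A common adaptation of Elo to N-player rooms, sketched below in C++, treats one game as N(N-1)/2 pairwise matches decided by finishing order, which covers points 1 and 2 above; point 4 can be handled by scoring a disconnecting player as last place, and point 3 by letting the K-factor shrink as a player's game count grows. The constants here are tunable assumptions, not part of any standard:

        #include <cmath>
        #include <vector>

        // places[i] is player i's final place (1 = winner); a disconnecting player
        // can simply be assigned the last place so every opponent "beats" them.
        void updateRatings(std::vector<double>& ratings, const std::vector<int>& places,
                           double k = 32.0) {
            const std::size_t n = ratings.size();
            std::vector<double> delta(n, 0.0);
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t j = 0; j < n; ++j) {
                    if (i == j) continue;
                    // classic Elo expected score for the pair (i, j)
                    double expected = 1.0 / (1.0 + std::pow(10.0, (ratings[j] - ratings[i]) / 400.0));
                    double actual   = places[i] < places[j] ? 1.0
                                    : places[i] > places[j] ? 0.0 : 0.5;
                    delta[i] += k / (n - 1) * (actual - expected);  // normalize over opponents
                }
            for (std::size_t i = 0; i < n; ++i) ratings[i] += delta[i];
        }

    Dividing K by the number of opponents keeps a 4-player game from swinging ratings faster than a head-to-head one; passing a smaller k for veterans implements the activity reward.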

    Read the article

  • Software design methods for Java or any other programming language

    - by IkerB
    I'm a junior programmer and I would like to know how professionals write their code, or which steps they follow when they are creating new software: which steps they follow, which programming methodology, which software architecture design, etc. I would like to find a tutorial that explains, from the beginning, which steps I have to follow from the idea I have in my mind to the final version of the application, in any language. Or perhaps you could share the programming steps or rules that you follow. Because every time I want to create an application I spend little time on design and a lot of time coding (I know, that's not good).

    Read the article

  • Have a missing file and can't run Ubuntu on startup, and the flash drive I used didn't work either; any help?

    - by trent
    When I first downloaded Ubuntu to install it, it downloaded quickly and caused no problems on my laptop. But then when it came time to install it as one of my operating systems, it wouldn't run because it said a file was missing or damaged. So I uninstalled it and re-installed it, but I still had the same problem. So I tried a flash-drive version, but that didn't work either because it wouldn't boot from that, nor would it detect my flash drive (in this case, an old 4 GB mp3 player). Any help or tips and ideas would be great; I just need a decent operating system because Windows is getting to be terrible at the moment. Just note I wanna run Ubuntu alongside Windows 8.1.

    Read the article
