Search Results

Search found 51790 results on 2072 pages for 'long running'.


  • How to skip words in OS X terminal?

    - by Yuval
    Say I wrote a long command in the Mac OS X terminal, e.g. "Hello, how are you? I am good thank you. How is it going with you? Fine, thanks", and now I decide I want to change the word Hello to Hi. To do that, right now I have to keep pressing (or hold down) the left arrow key until the cursor gets to the end of the word Hello, and then delete it. The usual 'holding down Option' technique doesn't work as it does in most other OS X applications. Is there a way to skip a word at a time instead (or any other shorter way of getting the cursor there)? Thanks!
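    (A hedged aside: the shell's line editor, not the OS, handles this. With bash's default readline bindings, Esc then B and Esc then F already jump backward and forward by word, and Terminal.app can be configured to send Option as Meta. The ~/.inputrc lines below are a minimal sketch of the explicit bindings; the exact escape sequences your terminal sends may differ, so check them with Ctrl-V first.)

        # ~/.inputrc - a sketch, assuming bash/readline
        "\eb": backward-word    # Esc-B / Option-B once Option acts as Meta
        "\ef": forward-word     # Esc-F / Option-F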

    Read the article

  • Domain connection shows as "unauthenticated"

    - by gareth89
    I have seen various questions for this problem floating around, but either the circumstances aren't the same or the solution doesn't work, so I thought I would post it to see if anybody has any suggestions. Various domain PCs and laptops appear to randomly give the connection name of "lewis.local 2 (Unauthenticated)" - lewis.local being our domain - and show an exclamation mark where the network type logo is normally shown. This also appears to happen every time we connect via VPN. Our setup is:
    2 servers, both running Windows Server 2003 R2 (x32); the main server has AD, DNS and DHCP installed
    IPv4 on approx. 30 client machines (some wired, some wireless)
    If anybody has any thoughts on solutions I would appreciate it. I have tried removing all but the AD server roles, resetting all of the systems, and nothing. It doesn't prevent anything from working (it behaves just like a normal domain connection most of the time), but it is getting frustrating! Also, I don't know if it could have anything to do with it, but the DHCP server seems to have quite a long lead time on issuing IP addresses to clients.

    Read the article

  • How to get Cinnamon working in Virtualbox?

    - by kavoura
    I installed Ubuntu 11.10 (32-bit) in VirtualBox 4.1.8. I wanted to install Cinnamon, so I did and I have the option to choose it from the login screen, as well as GNOME options. If I choose Ubuntu I get Unity, which works fine. GNOME also works. But when I choose Cinnamon, the screen goes black and nothing responds and I have to reset the virtual machine. I have already installed Ubuntu 11.10 (64-bit) onto the PC in a separate partition (currently running 10.10 but trying out 11.10) and in that Cinnamon works perfectly. In VirtualBox I have 3D settings enabled and gave the virtual machine 128 MB graphics RAM, and 1024 MB system RAM. What settings should I change or what should I do to get Cinnamon working in Ubuntu in VirtualBox? I have also tried doing it in LinuxMint 12, but I get the same problem, just a black screen when selecting Cinnamon. So are Cinnamon and VirtualBox incompatible?

    Read the article

  • How do I get rid of phantom bookmarks in Google Chrome on Mac OS X 10.6?

    - by Philip
    I'm running Chrome 5.0.375.38 on OS X 10.6 Snow Leopard and although I'm positive that when I installed it I told it NOT to import my Firefox bookmarks, it nevertheless still accessed my OLD Firefox bookmarks (including some that I deleted) when I used the location bar. HOWEVER, when I opened the bookmarks manager, it said that I have no bookmarks whatsoever. Seeking to solve this problem, I installed XMarks on both FF and Chrome, and forced Chrome to download the server bookmarks. Now Chrome lists all my current FF bookmarks, but STILL sees the old, phantom bookmarks from when I first installed Chrome in the location bar, even though when I search for these same bookmarks in the bookmarks manager they don't show up. Aargh! Any ideas? Even if there's some way to force-kill-wipeout-clean-erase ALL my Chrome bookmarks that's fine as long as it kills the phantom ones b/c I can still overwrite with XMarks. Thanks!

    Read the article

  • Deleted the .AppleDouble files inside my Time Machine backups - are they still OK?

    - by Jon M
    My Ubuntu server is set up to emulate a TimeCapsule (after a very long weekend following the instructions here, here and here). My macbook pro has been backing up happily to it for a month or so now, and all seems well. The other day I was tidying up the extraneous files from my music collection on the server, got a bit loose with the find command... and ended up deleting all the .AppleDouble files underneath '/', which included the Time Machine folder. Now, Time Machine still appears to work fine, it backs up regularly, I can look through all the previous versions of my files, and they seem to restore without trouble. My question is: by deleting the .AppleDouble files, have I actually broken anything? Is the TM data still good, or should I trash it and start fresh (i.e. with a new 'day 0' full backup)?

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed it must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle.

    If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file (.vmdf) and a log file (.vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no 'duplicate' storage requirement other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database.

    It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool:

        RESTORE DATABASE WidgetProduction_virtual
        FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
        WITH MOVE N'WidgetProduction'
                 TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
             MOVE N'WidgetProduction_log'
                 TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
             NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional, storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly.

    (The original post illustrates this with screenshots of an 8 GB production database, its similarly sized backup file, and the small .vldf and .vmdf files that represent the only additional storage used by the new database following the virtual restore.)

    The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • How to Install a Wireless Card in Linux Using Windows Drivers

    - by Justin Garrison
    Linux has come a long way with hardware support, but if you have a wireless card that still does not have native Linux drivers, you might be able to get the card working with a Windows driver and ndiswrapper. Using a Windows driver inside of Linux may also give you faster transfer rates or better encryption support, depending on your wireless card. If your wireless card is already working, it is not recommended to install the Windows driver just for fun, because it could cause a conflict with the native Linux driver.
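    (A hedged sketch of the usual ndiswrapper workflow, assuming a Debian/Ubuntu system and a downloaded Windows .inf driver; the package name and path below are illustrative:)

        sudo apt-get install ndiswrapper-utils-1.9   # or your distro's ndiswrapper package
        sudo ndiswrapper -i /path/to/driver.inf      # install the Windows driver
        ndiswrapper -l                               # verify it lists the driver ("hardware present")
        sudo modprobe ndiswrapper                    # load the kernel module
        sudo ndiswrapper -m                          # write the modprobe alias so it loads at boot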

    Read the article

  • When to use shared libraries for a web framework?

    - by CamelBlues
    tl;dr: I've found myself hosting a bunch of sites running on the same web framework (symfony 1.4). Would it be helpful if I moved all of the shared library code into the same directory and shared it across the sites?
    I see some advantages to this:
    Each site takes up less disk space
    Library updates (an unlikely scenario) can take place across all sites
    I also see some disadvantages, mostly in terms of a single point of failure and the inability to have sites using different versions of the framework. My real concern, though, is performance. I hypothesize that I will see a performance increase, since the PHP code will already be cached for all sites when they call the framework. Is this a correct hypothesis?
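    (If you try this, a minimal sketch of the symfony 1.4 convention: each project's config/ProjectConfiguration.class.php requires the framework autoloader by path, so pointing every site at one shared checkout is enough. The shared path below is hypothetical.)

        <?php
        // config/ProjectConfiguration.class.php (per site)
        // point at the single shared framework copy instead of a per-project one
        require_once '/usr/share/symfony-1.4/lib/autoload/sfCoreAutoload.class.php';
        sfCoreAutoload::register();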

    Read the article

  • How to install older ruby version on ubuntu Precise Pangolin?

    - by Codestudent
    I got a present: older Rails tutorials that need an old Ruby version. I tried to install ruby-1.8 with the package manager, but I still got problems with the tutorial example code. Next I tried rvm to install the old Ruby version. Unfortunately I got an error and I do not know what to do. I searched the internet; many people have no problems with rvm. Running rvm gives:
    ERROR: Branch origin/ruby_1_8_4 not found.
    and
    ERROR: Error running 'GEM_PATH="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4:/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4@global" GEM_HOME="/usr/share/ruby-rvm/gems/ruby-1.8.4-tv1_8_4" "/usr/share/ruby-rvm/rubies/ruby-1.8.4-tv1_8_4/bin/ruby" "/usr/share/ruby-rvm/src/rubygems-1.3.7/setup.rb"', please read /usr/share/ruby-rvm/log/ruby-1.8.4-tv1_8_4/rubygems.install.log
    Please give me a hint.
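    (A hedged hint, not from the original thread: the first error says rvm has no branch for 1.8.4, which is old enough to have dropped out of support. Most old Rails tutorials run fine on the nearest supported release, so something like the following may sidestep the problem:)

        # sketch: install the closest well-supported 1.8.x instead of 1.8.4
        rvm install 1.8.7
        rvm use 1.8.7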

    Read the article

  • Does liquid cooling mean I no longer need to dust my machine?

    - by Starkers
    I've got two long-haired cats, a dog, and I live with a smoker. I use my computer pretty much all day every day, and though I put it to sleep at night, those fans are constantly going during the day. In just 6 months the rear fans have so much hair wrapped around them it looks more like something from a vacuum cleaner than electronic equipment! Due to this, I'm interested in liquid cooling. However, it appears that many liquid-cooled systems have both fans and liquid cooling. Are those just hybrid solutions? They wouldn't really help my situation. There do appear to be fanless systems that use a radiator to dissipate heat. If I implemented one of these, could I seal up the vents on my PC and never have to dust it again? Is there a disadvantage to fanless liquid cooling? I don't need to overclock at the moment, but if I ever want to push my components will fanless liquid cooling be pretty rubbish?

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended.
    I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only English (default) and French options remain. When one selects a language option a parameter is added to the URL. For example, the home page:
    https://example.com (default)
    https://example.com/main?l=fr_FR (French)
    I added a robots.txt to stop Google from crawling any of the language translations:
    # robots.txt generated at http://www.mcanerin.com
    User-agent: *
    Disallow:
    Disallow: /cgi-bin/
    Disallow: /*?l=
    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):
    https://example.com/reports/view/884?l=vi_VN&l=hy_AM
    This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's a parameter that needs to be de-indexed.
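    (A hedged sketch of the 404/410 approach mentioned above, assuming a PHP front controller; the parameter check and message are illustrative, not from the original post:)

        <?php
        // answer "Gone" for any language parameter we no longer offer, so the
        // crawler de-indexes the stale URL instead of seeing a 200 page
        $allowed = array('fr_FR');   // English is the default and carries no ?l=
        if (isset($_GET['l']) && !in_array($_GET['l'], $allowed, true)) {
            http_response_code(410); // 410 Gone (404 also works, a little slower)
            exit('This translation has been removed.');
        }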

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server Learning series is indeed fun. The series is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the book starts that basic, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly. It in no way takes shortcuts to explain any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015
    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:
    query() – Used to extract XML fragments from an XML data type.
    value() – Used to extract a single value from an XML document.
    exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    modify() – Updates XML data in an XML data type.
    node() – Shreds XML data into multiple rows (not covered in this blog post).
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014
    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:
    Statement Termination – The statement with the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
    Batch Termination – The entire client call is terminated.
    XACT_ABORT – (ON = the entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013
    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.
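    (A hedged illustration of the OPTION clause, not from the original post; the table name is hypothetical:)

        -- force a fresh plan and single-threaded execution for this one query
        SELECT CustomerID, COUNT(*) AS OrderCount
        FROM dbo.Orders
        GROUP BY CustomerID
        OPTION (RECOMPILE, MAXDOP 1);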
    Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used as a last resort, and only by experienced developers and database administrators, to achieve the desired results. [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012
    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a Temp Table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:
    Anchor query (runs once and the results ‘seed’ the Recursive query)
    Recursive query (runs multiple times and is the criteria for the remaining results)
    UNION ALL statement to bind the Anchor and Recursive queries together
    INNER JOIN statement to bind the Recursive query to the results of the CTE
    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011
    Let’s get some basic definitions down first. Take the workplace example where “Tom” needs “Read” access to the “Financial Folder”. What are the Securable, Principal, and Permissions from that last sentence?
    A Securable is a resource that someone might want to access (like the Financial Folder).
    A Principal is anything that might want to gain access to the securable (like Tom).
    A Permission is the level of access a principal has to a securable (like Read).
    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which one was your favorite video, as that will help me understand what works and what needs improvement.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
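    (To make the four elements of the hierarchical query section concrete, a hedged sketch with a hypothetical Employee table (EmpID, ManagerID, FirstName); not from the original post:)

        WITH EmpList AS (
            SELECT EmpID, FirstName, 1 AS Lvl
            FROM dbo.Employee
            WHERE ManagerID IS NULL              -- anchor query: seeds the results
            UNION ALL                            -- binds anchor and recursive parts
            SELECT e.EmpID, e.FirstName, c.Lvl + 1
            FROM dbo.Employee AS e
            INNER JOIN EmpList AS c              -- recursive query references the CTE
                ON e.ManagerID = c.EmpID
        )
        SELECT EmpID, FirstName, Lvl FROM EmpList;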

    Read the article

  • Ubuntu / FileZilla FTP Set Up

    - by Dean Mark Anthony
    I have successfully installed VSFTPD and configured it to allow FTP connections from my Windows laptop running FileZilla. FileZilla is pointing at the directory srv/ftp on my server; the files stored in srv/ftp are showing in the Remote Site section of FileZilla, which indicates that it is connecting to my FTP server. What I need to do is point FileZilla toward the directory var/www so that all files I transfer are visible on my self-hosted website. I cannot find out how to do this. Do I need to make some updates to the vsftpd.conf file, and if so, what do I change? Do I need to make updates to my FileZilla console? I am comfortable with changing .conf files, port forwarding, setting permissions, etc. I just need to know precisely what needs to be changed. Much love...
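    (A hedged sketch of the vsftpd.conf side, assuming local-user logins; verify the option names against your vsftpd version and reload the service afterwards:)

        # /etc/vsftpd.conf (excerpt)
        local_enable=YES
        write_enable=YES
        chroot_local_user=YES
        local_root=/var/www     # land FTP users in the web root instead of srv/ftp

    (After changing local_root, uploaded files should appear under /var/www and thus on the site, provided your FTP user has write permission there.)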

    Read the article

  • Blogging After The Blog Boom

    - by Tim Murphy
    I have been blogging on Geeks With Blogs since 2005 and on other blogging sites before that.  In this age of Twitter, Facebook and G+ it feels like we are in the post-blog age and yet here I continue.  There are several reasons for this.  The first is that I still find it to be the best place for self publishing long form thought that won’t fit well on Twitter or Facebook.  Google+ allows for this type of content, but it suffers from the same scroll factor as the other social media platforms.  If you aren’t looking at the right moment you miss it.  On a blog I can put complete thoughts with examples and people can find what they want via key words or search engine. The second reason I blog is to have a place for me to put information I want to be able to reference back to later.  Although I use OneNote which is now accessible everywhere the blog gives me somewhere to refer co-workers and clients when I have solutions for problems I have previously solved. I know that other people use their blog as a resume builder, but that hasn’t been one of my primary concerns.  Don’t get me wrong.  Opportunities do come up because you put out well thought out, topical material.  That just isn’t one of my top motivators. I don’t always find the time to blog or even have anything to say lately, but I will continue to produce content for myself and others to learn from and hopefully enjoy. del.icio.us Tags: Blogging,Social Media

    Read the article

  • Fix Google Reader Lag by Blocking Google Plus Button

    - by Jason Fitzpatrick
    Chrome: Many Google Reader fans have noticed, since the upgrades last month, that the service is unbearably slow. Speed things up by blocking the Google Plus button. Ever since the upgrade from the old Google Reader interface to the new integrated-with-Google-Plus interface, many Google Reader users were reporting a painfully long lag between reading entries in Reader. Previously, hitting a keyboard shortcut or arrow button to move through new stories was instant with no noticeable lag. After the upgrade, a lag of 3-5 seconds per individual story became common (we experienced this annoying lag around the How-To Geek office immediately after the upgrade). One of the theories was that the addition of the Google Plus button to every article was causing memory issues. Geeks Are Sexy tested the theory by blocking this address: plusone.google.com/u/0/_/+1/fastbutton using AdBlock. While people were reporting great success with that move (and you may find it works great too) we didn’t have any luck. What did work for us was installing Chromeblock and, while visiting reader.google.com, clicking on the ChromeBlock toolbar button and blocking Google +1. After that the 3-5 second lag vanished and browsing articles was as snappy as it had been. Hit up the link below to grab a copy of Chromeblock.
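    (For reference, a hedged sketch of the equivalent custom blocking filter; exact syntax varies between AdBlock-style extensions:)

        ||plusone.google.com/u/0/_/+1/fastbutton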

    Read the article

  • Is this information about me as a programmer concise and good enough?

    - by Nick Rosencrantz
    I not only want you to review my resume; please also tell me what you think Google means in their answer: "We don't look at personal letters and we like your resume and we can recommend you internally but we need measurable experience." What is meant by "measurable" here? Do they mean like O(1) compared to O(n), selling an entire company, grades, or what? This is what I sent:

    Curriculum vitae
    Nick Rosencrantz
    Competence: System development, web development
    Technical competence: Java, Javascript, HTML, XML, CSS, AJAX, PHP, SQL, Python
    Employments:
    2012- Mobile Innovation AB, System Developer. IT consultant (Java programmer)
    2011-2012 Bnano International Ltd, System Developer. Python programming in Google App Engine
    2008-2009 Sweden Island AB, System Developer. Programming C++ and Java EE components
    2003-2007 Studies, Stockholm School of Economics. During studies worked as network technician at Effnet AB
    2000-2002 Jadestone AB, System Developer. System development in Java/J2EE. In 2001: KTH, Assistant. Teaching application server programming in Java Enterprise + WebLogic + Informix.
    1999-2000 Studies, KTH
    1996-1998 Spray.se, System development, Researcher
    1995-1995 Finance broker. Backoffice work with financial instruments
    1993-1994 Computer & Audio-Technical Systems AB, Programming, summer job
    Education/Courses: Stockholm School of Economics, Master of Science diploma; KTH, Computer Science undergraduate studies
    Languages: Swedish, English, also some German and French
    Born 1973, Swedish citizen

    I also have a project-based CV which is several pages long, but the above is about what I was aiming for in the beginning when I was looking for a job. Now I have employment as an IT consultant in central Stockholm, and I want to make my resume concise and also understand what Google meant by their answer. (It was a Swedish Google employee who recruited via LinkedIn from my Stockholm School of Economics groups, since that is a small elite economics school where I took my M.Sc., and KTH is one of the largest universities in northern Europe. I sent her a link with my CV and she said she could promote me internally if I added "measurable experience", and I've been thinking for weeks what that may mean.)

    Read the article

  • Apple MagicMouse randomly loses connection

    - by Yuval
    My Magic Mouse periodically and randomly loses its connection to my MacBook, sitting approximately a foot and a half away from it. There are no software updates available for Snow Leopard (10.6.3). I have a Bluetooth keyboard (that sits a little closer to the MacBook, though I'm not sure this is the issue) that rarely suffers from this problem. Google results mostly suggest updating the software, but I have the most updated version of it already. Does anybody have any idea what could be causing the issue? Is this simply a defective mouse? Additional info: I tried replacing the batteries with new ones, making sure they sit tightly in position, with no change in behavior (it still disconnects periodically). Also, this doesn't seem to be a case of the mouse turning itself off to save battery; it does not reconnect when I move or click it. If I wait long enough (a couple of minutes) it reconnects on its own. Otherwise I have to go to the Bluetooth menu with my MacBook touchpad and choose "yuval's mouse - Connect".

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server runs MySQL and the applications may use different database technologies; their implementation isn't important. I have a MySQL database online, web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the client sends a date corresponding to the last time the app was updated, and the response is a JSON filled with all new and updated objects plus deleted IDs, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "deleted" set to 1 with the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
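    (A hedged sketch of exactly that scheme in MySQL; the table and column names are hypothetical:)

        -- tombstone + last-modified columns on each synced table
        ALTER TABLE items
            ADD COLUMN last_modified TIMESTAMP NOT NULL
                DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
            ADD COLUMN deleted TINYINT(1) NOT NULL DEFAULT 0;

        -- the API endpoint's query: one pass returns new, updated and deleted rows
        SELECT id, name, deleted, last_modified
        FROM items
        WHERE last_modified > :last_sync;   -- date supplied by the client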

    Read the article

  • Disk failing on dell mini, are there diagnostic tools?

    - by John Lawrence Aspden
    Hi, my Dell Mini 10v running Ubuntu 10.10 runs fine for hours, but then will suddenly slow down drastically. Switching to a console shows lots of error messages, which also get into /var/log/syslog. These errors happen every couple of seconds. I'm figuring that the disk is failing, but is there any way to be sure, and can laptop disks be replaced easily?
    Mar 20 11:08:12 dell-mini kernel: [ 2476.378774] ata1.00: status: { DRDY ERR }
    Mar 20 11:08:12 dell-mini kernel: [ 2476.378785] ata1.00: error: { UNC }
    Mar 20 11:08:12 dell-mini kernel: [ 2476.449841] ata1.00: configured for UDMA/133
    Mar 20 11:08:12 dell-mini kernel: [ 2476.449887] ata1: EH complete
    Mar 20 11:08:14 dell-mini kernel: [ 2478.777754] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
    Mar 20 11:08:14 dell-mini kernel: [ 2478.777976] ata1.00: BMDMA stat 0x24
    Mar 20 11:08:14 dell-mini kernel: [ 2478.778059] ata1.00: failed command: READ DMA
    Mar 20 11:08:14 dell-mini kernel: [ 2478.778162] ata1.00: cmd c8/00:08:43:3b:97/00:00:00:00:00/e0 tag 0 dma 4096 in
    Mar 20 11:08:14 dell-mini kernel: [ 2478.778166]          res 51/40:00:47:3b:97/00:00:00:00:00/00 Emask 0x9 (media error)
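    (A hedged diagnostic sketch, not from the original question: the { UNC } / media error lines usually indicate unreadable sectors, and smartmontools can confirm. The package and device names may differ on your system:)

        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda       # quick overall SMART health verdict
        sudo smartctl -a /dev/sda       # full attributes; watch Reallocated_Sector_Ct
        sudo smartctl -t long /dev/sda  # start an extended offline self-test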

    Read the article

  • Announcing: Oracle Database 11g R2 Certification on Oracle Linux 6

    - by Monica Kumar
    Oracle Announces the Certification of the Oracle Database on Oracle Linux 6 and Red Hat Enterprise Linux 6
    Yesterday we announced the certification of Oracle Database 11g R2 with Oracle Linux 6 and Unbreakable Enterprise Kernel. Here are the key highlights:
    Oracle Database 11g Release 2 (R2) and Oracle Fusion Middleware 11g Release 1 (R1) are immediately available on Oracle Linux 6 with the Unbreakable Enterprise Kernel.
    Oracle Database 11g R2 and Oracle Fusion Middleware 11g R1 will be available on Red Hat Enterprise Linux 6 (RHEL6) and Oracle Linux 6 with the Red Hat Compatible Kernel in 90 days.
    Oracle offers direct Linux support to customers running RHEL6, Oracle Linux 6, or a combination of both.
    Oracle Linux will continue to maintain compatibility with Red Hat Linux.
    Read the full press release.

    Read the article

  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out this is not an issue with the code below (or at least not entirely). Somehow the objects I use for keylogging and player automation (a basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects such as simulating button presses, or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue I guess I'll be banging my head against the desk for a bit. /Update
    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event, such that when a player finishes a level, it writes all their keypresses to file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files.

        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "")
        {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();
        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for (i = 0; i < ds_list_size(keyCodes); i += 1)
        {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);

    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped. Also, if anyone can tell me a better way to have sequentially numbered log files, that would be great. Log files have to continue counting over multiple starts/stops of the game.
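    (On the "better way" question, a hedged GML sketch, not from the original thread: persist the counter in its own file so there is no directory scan at all and the numbering survives restarts. The file name is illustrative.)

        // read the last used log number, defaulting to 0 on first run
        var counter_file;
        file_count = 0;
        if (file_exists("logs/counter.txt"))
        {
            counter_file = file_text_open_read("logs/counter.txt");
            file_count = real(file_text_read_string(counter_file));
            file_text_close(counter_file);
        }
        // claim the next number and write it back immediately
        file_count += 1;
        counter_file = file_text_open_write("logs/counter.txt");
        file_text_write_string(counter_file, string(file_count));
        file_text_close(counter_file);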

    Read the article

  • iPhoto fails to see my new iPhone 3Gs

    - by Alexey Kulikov
    Two weeks ago I bought the new iPhone 3GS. It synced like a charm with iTunes, and I was happy until a couple of days ago, when I noticed there was absolutely no way I could get any photos from the phone to my Mac, as it doesn't even appear in iPhoto's device list. Long story short, I have even tried resetting Leopard completely from scratch (and formatting the HD beforehand). Still no joy.
    Snow Leopard 10.6.1, absolutely clean install with all updates
    iPhone 3GS 3.1.2 - syncs like a charm with iTunes, but NOT iPhoto
    MacBook Air
    Any help will be greatly appreciated!
    Update: it works with my old iPhone, so why can't the 3GS work?

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything, as long as it has the .udl extension. Open the file and choose a provider. Click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At this point you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is beneficial if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you'll find that said connection string:
    Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp
    I hope this tip helps save you some time. How do you test if you don't have SSMS installed?

    Read the article

  • Handling inconsistent resource availability in Project 2007

    - by Lachlan McDonald
    Afternoon all, I have four resources: a project manager and three developers. The project manager can work anywhere from 9 to 5 each day, but only for a total of 10 hours per week. It doesn't matter when he works, as long as he doesn't exceed 10 hours per week. The developers, on the other hand, can only work up to 2 hours per day, for a total of 10 hours per week. If they work more than 2 hours in a day, they are over-allocated. How do I best configure Project to handle this kind of scheduling requirement?

    Read the article

  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working in a company using Perforce and am making way for distributed version control with Mercurial. I've had success importing Perforce history using perfarce (quite a suitable name, I laugh every time I see/say it); however, this only works with a single branch at a time. Here's how my P4 integration setup works:
    In Perforce, create a "client", which is kind of a description of what you will be constantly updating/checking out. This can only address one branch at a time (trunk or other).
    Once you do this, run hg clone p4://<server>/<client_name>
    Go to .hg/hgrc and put in the Perforce path line: perforce = p4://<server>/<client_name>
    Work normally with the code under Mercurial; do hg pull perforce to sync up, hg push to export a changelist
    What I'd like to be able to do is have a Perforce path per branch and have everything work in the same repository. Now, pushing is not a problem; however, if I pull the history from another branch it would end up in the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch. Even if I have no merging history, it would be sweet enough to be able to preserve it. There are also other plugins for CVCSs that let you integrate Mercurial, but AFAIK the Subversion one has the same problem. I don't think there is a straight-through way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
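    (A hedged sketch of the .hg/hgrc side of that idea; named entries in [paths] are standard Mercurial, though whether perfarce can map each one onto a different Mercurial branch is exactly the open question. The client names are hypothetical:)

        [paths]
        perforce    = p4://<server>/<client_trunk>
        perforce-R5 = p4://<server>/<client_R5>

    (With this in place, hg pull perforce-R5 at least addresses the right Perforce client; landing the result on the R5 branch would still need scripting around the pull, e.g. in a hook.)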

    Read the article
