Search Results

Search found 38176 results on 1528 pages for 'synchronize files'.

Page 688/1528 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • "File" exists or not

    - by SnailTang
    ls -il
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access éaj/p+st.ó·e: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        ls: cannot access é@j/p¦ft.¦·N: No such file or directory
        total 55456
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? éaj/p+st.ó·e
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
        ? -????????? ? ? ? ? ? é@j/p¦ft.¦·N
    When I try to show these files, I get the names p+st.ó·e and p¦ft.¦·N. Where do these files (or whatever they are) actually exist, and what makes them show up here?
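    Entries like -????????? mean that stat() is failing on the directory entries themselves, so ls can list the names but not read any metadata. When the names are merely mangled (wrong encoding), such files can be inspected and removed by inode number instead of by name; a minimal sketch, where the inode 1234567 is a placeholder for whatever ls actually prints:

        # print inode numbers and escape unprintable characters in names
        ls -lib

        # inspect, then remove, a file by inode in the current directory
        find . -maxdepth 1 -inum 1234567 -ls
        find . -maxdepth 1 -inum 1234567 -exec rm -i {} \;

    If even the inode column shows ?, the directory itself is likely damaged and an offline fsck of the filesystem is the safer next step.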

    Read the article

  • Preview and Purchase Ebooks with Kindle for PC

    - by Matthew Guay
    Want to look over a new book, or buy it immediately in ebook format? Here's how you can preview and purchase most new books from your PC the easy way. Most new books, including almost all New York Times Bestsellers, are available in ebook format from Amazon's Kindle store. The Kindle store also includes numerous free ebooks, including out-of-print classics and a surprising number of recent books. With the free Kindle for PC reader, you can read any of these ebooks without having to purchase a Kindle device. Preview Ebooks Before You Purchase Sometimes, it can be hard to know if you want to purchase a new book without reading some of it first. With Kindle for PC, however, you can download a sample of any ebook available for free. The sample usually includes the table of contents, foreword or introduction, and often part or all of the first chapter. To get an ebook sample, find the book you want in the Kindle store (link below). Now, under the Try it free box, select the correct computer or device to send the sample to, and click Send Sample now. Amazon will thank you for your order, even though this is only a free preview. Click the Go to Kindle for PC button to open Kindle and read your ebook preview. Or, if Kindle is already running, press the Refresh button in the top right corner to check for new ebooks and previews. Kindle will synchronize and download the previews you selected. The most recently downloaded items show up on the top left. All sample books have a red "Sample" bar on the bottom of their cover, and they also include links on the cover to buy the book or view more info about it. Double-click your sample to start reading it. Your ebook sample will usually open at the introduction or the beginning of the first chapter, but you can also view the index, cover, and more. When you reach the end of the sample book, you can click a link to buy the book or view more details about it. Strangely, both of these links currently take you to the ebook's page on Amazon.com, but perhaps in the future the Buy link will let you purchase the book directly. Or, you can also click Buy Now on a sample book directly from your Kindle library. If you clicked one of these links, you will be returned to the ebook's page on Amazon. Choose the PC or Kindle you want the book delivered to, and this time, select Buy Now with 1-Click. Add your payment info if you're not already set up for 1-Click Shopping, and then you'll be shown the same Thank You page as before. Refresh Kindle for PC, and your new ebook will automatically download. Strangely, the sample ebook is not automatically removed, so you can right-click on the sample and select Delete this Book. Additionally, your last-read page in the sample is not synced to the purchased book, so you may have to find your place again. Now, enjoy your full ebook! Download Free Books for Kindle The Kindle Store has an amazing number of free ebooks. Some books may only be free for a limited time as a promotion, while others, such as old classics, may always be free. Either way, once you download one, you can keep it forever. When you find a free ebook you want, select the Kindle or PC you want to download it to and click "Buy now with 1-Click". Notice that this book shows its price as $0.00, but the button still says Buy now. Rest assured, if the book's price shows up as $0.00, you will not be charged anything for downloading it. Your ebook will download as usual after your next refresh.
    Note that you can still download the sample first if you want, but since the book is free, just download the whole thing and delete it if you don't want it. Redownload Your Purchased or Free Books If you install Kindle on a new PC or delete a book from your library, you can always re-download it from your Amazon account. Browse to the Manage Your Kindle page on Amazon (link below), sign in with your Amazon account, and scroll down to the list of your purchased content. Select the book you wish to download, then choose the Kindle or PC you want to download it to and press Go. Note: There is a "Delete this title" button right below this. If you press the Delete button, you will never be able to re-download that book. Or, you can download the book directly from the Archived Items tab in Kindle on your other PC. And, if you have your Kindle content on multiple computers, your reading position will be synced via Whispersync: you can start reading on your desktop, and then resume where you left off from your laptop. Conclusion With these tips and tricks, it is much easier to preview and purchase new books, find and download free ebooks, and re-download any you've deleted from your PC. Have fun filling up your digital library! Links: Manage your Kindle account

    Read the article

  • MySQL binlogs seem incomplete?

    - by warl0ck
    I created a database and a table and inserted some data, and found this binlog.000001 in my log folder. But when I run mysqlbinlog binlog.000001, it only shows the output below, which seems incomplete. (There are only two files in the log dir: binlog.000001 and binlog.index.)
        /*!40019 SET @@session.max_insert_delayed_threads=0*/;
        /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
        DELIMITER /*!*/;
        # at 4
        #120924 21:12:56 server id 1 end_log_pos 107 Start: binlog v 4, server v 5.5.24-0ubuntu0.12.04.1-log created 120924 21:12:56 at startup
        # Warning: this binlog is either in use or was not closed properly.
        ROLLBACK/*!*/;
        BINLOG '
        GAVhUA8BAAAAZwAAAGsAAAABAAQANS41LjI0LTB1YnVudHUwLjEyLjA0LjEtbG9nAAAAAAAAAAAA
        AAAAAAAAAAAAAAAAAAAYBWFQEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
        '/*!*/;
        DELIMITER ;
        # End of log file
        ROLLBACK /* added by mysqlbinlog */;
        /*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
    If this warning was the cause ("Warning: this binlog is either in use or was not closed properly"), how do I force-close the log? EDIT: After a flush logs command, I see "0 rows affected" and a few new files (binlog.000001 binlog.000002 binlog.000003 binlog.000004 binlog.index), and the contents are nearly the same as binlog.000001. Now I dropped the database and tried to restore it with mysqlbinlog binlog.0* | mysql -u root -p, but the database wasn't recovered. EDIT 2: My config is:
        [mysqld]
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        lc-messages-dir = /usr/share/mysql
        skip-external-locking
        log-bin=/var/log/mysql/binlog
        binlog-do-db=mydb
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        expire_logs_days = 10
        max_binlog_size = 100M
    P.S. /var/log/mysql.err and /var/log/mysql.log are both empty.
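    The "in use" warning just means the server still has the current binlog open; FLUSH LOGS rotates and closes it cleanly. For the restore, binlog-do-db filtering plus replay order can bite: the dropped schema has to exist again before the logged statements replay, and if the DROP DATABASE itself was logged it will be replayed too. A rough sketch of the recovery flow, with paths and names taken from the config above:

        # close and rotate the active binlog
        mysql -u root -p -e "FLUSH LOGS;"

        # recreate the schema, then replay only mydb's events from all logs;
        # if the DROP DATABASE made it into the logs, cut the replay first
        # with mysqlbinlog --stop-position=N (find N in the mysqlbinlog dump)
        mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS mydb;"
        mysqlbinlog --database=mydb /var/log/mysql/binlog.0* | mysql -u root -p mydb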

    Read the article

  • Munin 2 data not showing up on graph

    - by letronje
    I have a fresh installation of Munin 2.0.1 on Ubuntu 12.04, and the first time I tried to view graphs, it showed them properly (after installation, I had to follow http://munin-monitoring.org/wiki/CgiHowto2 to set it up). After that, the graphs show up, but with just one data point (a single vertical line), as if no data has been collected since I tried it for the first time. In Munin 1.4, there was munin-cron, which was run every 5 minutes, and I saw new data being plotted in the graph at least every 5 minutes. But if there is no cron job in v2, how does data collection work in Munin 2? Is the data collected when the graphs are requested? The file timestamps in /var/lib/munin have not changed since the first time I tried the graphs. But I do see the munin-node process running (I restarted it several times). I also see no errors in the munin-node log files or the apache2 log files. Any idea what could be wrong? Screenshot: http://i.imgur.com/uzuAK.png Also, is there a way to pre-create graphs instead of generating them dynamically, on the fly?
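    Munin 2.0 still collects data exactly as 1.4 did: the package installs a cron entry that runs munin-cron (and thus munin-update) every five minutes as the munin user; only graph and HTML generation moved to CGI. If the RRD files under /var/lib/munin stop updating, running one cycle by hand usually surfaces the error. A quick check, with paths as shipped by the Ubuntu package (adjust if yours differ):

        # the cron entry the package ships
        cat /etc/cron.d/munin

        # run one collection cycle manually and watch for errors
        sudo -u munin /usr/bin/munin-cron

        # fresh timestamps here mean collection is working again
        ls -lt /var/lib/munin | head

    For the last question: munin.conf's graph_strategy and html_strategy settings can be switched from cgi to cron to pre-generate graphs the old way.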

    Read the article

  • Free or Open Solution for Storing and Charting CSV data

    - by rrrfusco
    I'm presently storing CSV files, combining them, opening them in OpenOffice, creating pivot tables, and then generating charts from the spreadsheet. I've looked at OOBase, but appending CSV files to Base is clunky for some reason. SQLite seems like a good database solution, but I haven't found a good charting program that connects to it with ease. Although OpenOffice (or LibreOffice) maintains the references and allows you to update the information, this process is far from efficient. There are too many steps, and it seems one program should handle all of these tasks. A better program would be more intuitive, allow you to simply add inserts into a database, and include an interface for standard charting settings. EDIT: Simplest Automated Analysis and Chart Generation Tool? The above answer references Spotfire and Tableau, which have free 14- and 30-day trials respectively. Each program is nicely streamlined and designed. I'm looking for a program between this quality and LibreOffice. Can you recommend a better open or free desktop solution for Windows?
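    As a middle ground, the sqlite3 command-line shell can import CSV directly and gnuplot can chart query results, which covers the store/combine/chart loop without a spreadsheet in between. A rough sketch, assuming a hypothetical two-column measurements.csv (recent sqlite3 versions create the table from the header row on .import):

        # import the CSV into a SQLite database
        printf '.mode csv\n.import measurements.csv measurements\n' | sqlite3 data.db

        # aggregate with SQL, then chart the result
        sqlite3 -csv data.db "SELECT day, SUM(value) FROM measurements GROUP BY day;" > daily.csv
        gnuplot -e 'set datafile separator ","; set terminal png; set output "daily.png"; plot "daily.csv" using 2 with lines title "daily total"'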

    Read the article

  • Can't find disk usage in one directory

    - by Xster
    Similar questions are asked frequently, but none of the suggested answers solved my issue: I also have some disk space usage that I can't find. In df:
        Filesystem     1K-blocks      Used Available Use% Mounted on
        /dev/sda1      144183992 136857180      2652 100% /
        udev             2013316         4   2013312   1% /dev
        tmpfs             808848       876    807972   1% /run
        none                5120         0      5120   0% /run/lock
        none             2022116        76   2022040   1% /run/shm
        overflow            1024         0      1024   0% /tmp
    I checked the inodes, I checked lsof +L1 for deleted-but-open files, I rebooted, and I checked for files hidden behind mounts, but none of these were the issue. The usage grows periodically, and I'm running out of things to delete to feed the beast. It's all in the home directory of the only user I have. In ~, du -h --max-depth=1 gives:
        192K ./.nv
        2.1M ./.gconf
        12K  ./Pictures
        1.6M ./.launchpadlib
        12K  ./Public
        24K  ./.TemporaryItems
        8.9M ./.cache
        12K  ./Network Trash Folder
        28K  ./.vnc
        11M  ./.AppleDB
        48K  ./.subversion
        1.9G ./.xbmc
        8.0K ./.AppleDesktop
        12K  ./.dbus
        81M  ./.mozilla
        12K  ./Music
        160K ./.gnome2
        44K  ./Downloads
        692K ./.zsh
        236K ./.AppleDouble
        64K  ./.pulse
        4.0K ./.gvfs
        1.4M ./.adobe
        44K  ./.pki
        44K  ./.compiz-1
        168K ./.config
        1.4M ./.thumbnails
        12K  ./Templates
        912K ./.gstreamer-0.10
        8.0K ./.emacs.d
        92K  ./Desktop
        1.3M ./.local
        12K  ./Ubuntu One
        12K  ./Documents
        296K ./.fontconfig
        12K  ./.qt
        12K  ./.gnome2_private
        20K  ./.ssh
        20K  ./.mission-control
        12K  ./Videos
        12K  ./Temporary Items
        640K ./.macromedia
        124G .
    I can't find a way to figure out how it got to 124G in that directory. There are no mount points in home.
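    One thing du -h --max-depth=1 never shows is large regular files sitting directly in ~ (it only prints per-directory lines plus the total), and here the listed subdirectories add up to barely 2 GB against a 124 GB total, so the bulk is almost certainly in plain files at the top level. Two quick ways to see them:

        # twenty largest entries, files and directories alike
        du -ah ~ 2>/dev/null | sort -rh | head -n 20

        # plain files directly in the home directory over 1 GB
        find ~ -maxdepth 1 -type f -size +1G -exec ls -lh {} \;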

    Read the article

  • PostgreSQL base backup script

    - by Terry Lorber
    I'm using the following script to do a file-level backup of PostgreSQL. I sometimes see that the last part, the cleanup after "pg_stop_backup" is called, hangs while it waits for the last WAL to be created. The REF_FILE to search for is sometimes wrong. I'm also shipping these files to a different machine every 5 minutes via rsync. What do other people do to safely remove old WAL files?
        #!/bin/bash
        PGDATA=/usr/local/pgsql/data
        WAL_ARCHIVE=/usr/local/pgsql/archives
        PGBACKUP=/usr/local/pgsqlbackup
        PSQL=/usr/local/pgsql/bin/psql
        today=`date +%Y%m%d-%H%M%S`
        label=base_backup_${today}
        echo "Executing pg_start_backup with label $label in server ... "
        CP=`$PSQL -q -Upostgres -d template1 -c "SELECT pg_start_backup('$label');" -P tuples_only -P format=unaligned`
        RVAL=$?
        echo "Begin CheckPoint is $CP"
        if [ ${RVAL} -ne 0 ]
        then
            echo "PSQL pg_start_backup failed"
            exit 1;
        fi
        echo "pg_start_backup executed successfully"
        echo "TAR begins ... "
        pushd $PGBACKUP
        tar -cjf pgdata-$today.tar.bz2 --exclude='pg_xlog' $PGDATA/*
        popd
        echo "TAR completed"
        echo "Executing pg_stop_backup in server ... "
        $PSQL -Upostgres template1 -c "SELECT pg_stop_backup();"
        if [ $? -ne 0 ]
        then
            echo "PSQL pg_stop_backup failed"
            exit 1;
        fi
        echo "pg_stop_backup done successfully"
        TO_SEARCH="*${CP:0:2}000000${CP:3:2}.00${CP:5}"
        echo "Check for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
        while [ ! -e ${WAL_ARCHIVE}/${TO_SEARCH}.backup ]; do
            echo "Waiting for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
            sleep 1
        done
        REF_FILE="`echo ${WAL_ARCHIVE}/*${CP:0:2}000000${CP:3:2}`"
        echo "Reference file ${REF_FILE}"
        # "-not -newer" or "\! -newer" will also return REF_FILE
        # so you have to grep it out and use xargs; otherwise you
        # could also use the -delete action
        find ${WAL_ARCHIVE} -not -newer ${REF_FILE} -type f | grep -v "^${REF_FILE}$" | xargs rm -f
        REF_FILE="`echo ${PGBACKUP}/pgdata-$today.tar.bz2`"
        echo "Reference file ${REF_FILE}"
        find $PGBACKUP -not -newer ${REF_FILE} -type f -name pgdata* | grep -v "^${REF_FILE}$" | xargs rm -f
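    On the REF_FILE question: instead of reconstructing the WAL segment name from the pg_start_backup checkpoint string, the server can be asked for it directly, and a contrib tool can do the pruning. A hedged sketch, reusing the script's variables (pg_xlogfile_name() is available in the 8.x/9.x line; pg_archivecleanup ships as a contrib module from PostgreSQL 9.0):

        # capture the exact WAL segment that ends the base backup
        LAST_WAL=`$PSQL -q -Upostgres -d template1 -P tuples_only -P format=unaligned \
            -c "SELECT pg_xlogfile_name(pg_stop_backup());"`
        echo "Backup ends in WAL segment ${LAST_WAL}"

        # delete every archived segment logically older than that one
        pg_archivecleanup ${WAL_ARCHIVE} ${LAST_WAL}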

    Read the article

  • How to utilize 4TB HDD, which is showing up as 2.72TB

    - by mason
    I have two internal HDDs. They're both 4TB capacity. They're both formatted with the GPT partitioning scheme, and they're basic disks (not dynamic). I'm on Windows 8 64-bit. I have UEFI, not BIOS. When I view the disks in the Computer Management MMC with Disk Management, it shows that each partition is formatted as NTFS and takes up the entire drive. It also shows that each drive has a capacity of 3725.90GB in the bottom section of Disk Management, but 2,794.39GB in the top section. When I view the disks in "My Computer"/"This PC", they only show up as 2.72TB, which matches the capacity I'm getting from some other 3TB HDDs I have. Why are they showing up as only 2.72TB? Will I be able to use the full 4TB capacity? Also of note, although I'm not sure it's relevant: I often get corrupted files on these two HDDs. None of my other HDDs give me corrupted files. Usually the problem is fixed by running chkdsk /f on the drives, but it's extremely annoying. In the picture below, they are the X: and Y: drives. Steps I've tried: flashed the latest BIOS (MSI J.90 to K.30).

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry. Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details about the customers. It was a tremendous honor to be in a single room of winners. We only wish we could have had more time to share stories from all the winners. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices. There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed: Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details about their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings. Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time. Lastly, Ashley Doodly, Head of IT from Motability, shared details about their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and tremendous improvement in throughput, which increased up to 50%.

    This year's winners by category are:

    Oracle Exalogic Customer Results using Fusion Middleware
    - Netshoes: ATG on Exalogic; 6X reduced H/W footprint, 6.2X increased throughput and 3 weeks time to market
    - Claro: part of America Movil, running a mission-critical Java application on Exalogic with 35X faster Java response time and 5X throughput
    - Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth
    - Ingersoll Rand: EBS on Exalogic; up to 40% reduction in overall IT budget, 3X reduced footprint

    Oracle Cloud Application Foundation Customer Results using Fusion Middleware
    - Mazda Motor Corporation: Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost
    - HOTELBEDS Technology: open source to WebLogic transformation
    - Globalia Corporation: introduced Oracle Coherence to fully reengineer the DTH system and provide multiple business and technical benefits
    - Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform
    - Comcast Corporation: the solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible

    Oracle SOA and Oracle BPM Customer Results using Fusion Middleware
    - NTT Docomo: network traffic solution based on Oracle event processing and Coherence; massive in scale: 12M users (50M in future), 800,000 events/sec
    - Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM & EBS; the platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g
    - Amadeus: Oracle BPM solution; business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval; plans to deploy across 20+ markets
    - Navitar: SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack
    - Motability: uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process

    Oracle WebCenter Customer Results using Fusion Middleware
    - News Limited: single platform running websites for 50% of Australia's newspapers
    - University of Louisville: "Facebook for Medicine"; Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues, expecting an annualized ROI of 277%
    - China Mobile Jiangsu: company portal (25k users) to drive collaboration & productivity
    - Life Technologies: portal for remotely monitoring & repairing biotech instruments
    - LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users

    Oracle Identity Management Customer Results using Fusion Middleware
    - Education Testing Service: identity management platform for provisioning & SSO of 6 million GRE, GMAT and TOEFL customers
    - Avea: Oracle Identity Manager allowing call center personnel to quickly change identity profiles to handle varying call loads based on a user self-service interface; decreased admin cost by 30%

    Oracle Data Integration Customer Results using Fusion Middleware
    - Raymond James: near real-time integration for improved systems (throughput & performance) and enhanced operational flexibility in a 24x7 environment
    - Wm Morrison Supermarkets: electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals)

    Oracle Application Development Framework and Oracle Fusion Development Customer Results using Fusion Middleware
    - Qualcomm Incorporated: solution providing immediate business value, enabling the self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver
    - Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
    - Marfin Egnatia Bank: a new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result being one of boosting end-user productivity

    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics) Customer Results using Fusion Middleware
    - INC Research: self-service customer portal delivering 5–10% of the overall revenue, expected to grow fast with the BI solution
    - Experian: reduction in time to complete the financial close process
    - Hologic Inc: solution saving months of decision-making uncertainty!

    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013. Additional Information: Blog: Oracle WebCenter Award Winners; Blog: Oracle Identity Management Winners; Blog: Oracle Exalogic Winners; Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week. Subscribe to our regular Fusion Middleware Newsletter. Follow us on Twitter and Facebook.

    Read the article

  • deploy LAMP config to new boxes with low/no effort

    - by user1444233
    I'm spending a lot of time setting up new CentOS 6 instances. I use a VCS (Subversion) for most of the config files and all of the webapp source files (GitHub), but even with excellent package managers (like yum, npm, easy_install, etc.) it still takes time. I'd like to get to the point where I could try out a new potential web host by just signing up for an account, logging in, and automatically sucking my standardised config onto the box. I know there is a set of tools that can help: Puppet, Chef, Vagrant; and a set of services that sell solutions: Jumpbox (http://www.jumpbox.com/) and BitNami Cloud (http://bitnami.org/cloud). I don't mind investing time in learning a new tool, but as a no-budget start-up, I'm keen to keep monthly costs down. My biggest concern is that time spent on server config is time away from the codebase, and that's where I think my team and I should be investing our energy, at least until we get funded and scale up a bit. I'd be grateful for some recommendations on which way to jump on config: (1) stick with SSH and manual deploys, at least until you get big; (2) bite the bullet and learn, say, Puppet: you may only use it 8-10 times, but it pays to have such an easily tunable server bootstrap; (3) don't bother, just pay the ~$100/month for a standard config service: it'll cost you over $1000/year, but you should focus on the code. Other questions in this domain: I use quite a complex stack (Drupal, Zend Server, MySQL, PHP, MongoDB, Python, Django), but are there standard(ish) setups that include these, or that I could build upon more quickly? Are the configs optimised for small, medium, and large VPSes (1GB, 4GB, 16GB)? How secure are they?
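    For scale, even without adopting Puppet or Chef, the same goal can be approximated with a single idempotent bootstrap script kept in the same repository as the configs; a rough sketch of the pattern, where every URL, path, and package name is a placeholder rather than a recommendation:

        #!/bin/bash
        # bootstrap.sh: run once on a fresh CentOS 6 box
        set -e

        # base packages (MongoDB and some others need EPEL or vendor repos)
        yum -y install httpd php php-mysql mysql-server python-setuptools git subversion

        # pull versioned config over /etc, then the application code
        svn export --force https://svn.example.com/server-config/trunk/etc /etc
        git clone https://github.com/example/webapp.git /var/www/webapp

        # start services on boot
        chkconfig httpd on && service httpd start

    The trade-off versus Puppet is that a script like this only converges on first run; Puppet's value shows up when configs drift on long-lived boxes.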

    Read the article

  • Change XRDP keyboard layout to en-gb Ubuntu 12.04

    - by Earl Sven
    Does anybody know how to change the keyboard layout to en-gb in an XRDP session on Ubuntu 12.04? I am using mstsc.exe to connect to an XRDP server hosting an XVNC session; however, I cannot work out how to apply the UK keyboard layout. A bit of googling has yielded these instructions, which allow me to change the keymap; however, using the keymap file I downloaded from here, I lose the ability to use the arrow keys, home/end, etc. Comparing the file with the standard one, there are substantially more differences than I would expect considering the similarity between the layouts. I only have RDP access to the box, so I don't seem to be able to actually generate a new layout per the instructions above; maybe it's a local console thing? Also, I can't change either the RDP client used or the RDP server, as they are my only access to the system; I don't have local console access. I do have root privileges on the OS, however. Any thoughts? Edit: I have found http:// xrdp.sourceforge.net/documents/keymap/newkeymap.html (apologies for not typing the link properly, but the antispam filter won't let me post more than 2 links), documentation on the XRDP SourceForge page which describes the keymap file format. It indicates that the values in the keymap files are Unicode (0x64, etc.); however, the files already on my system seem to use a different format (0:0 or 65307:27, etc.). Does anybody know what the difference is?
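    The generate step does not actually need a local console, just a running X server to query, and the XVNC session that xrdp starts should count. xrdp ships an xrdp-genkeymap tool that dumps whatever layout the X server currently has loaded, in xrdp's own decimal keysym:unicode pair format (which would explain why the files on disk look nothing like the hex values in the docs). A sketch, treating the file name as an assumption (0x0809 is the RDP layout code for UK English):

        # inside the XVNC session that mstsc/xrdp gave you:
        setxkbmap gb                      # switch this X session to the UK layout
        xrdp-genkeymap /tmp/km-0809.ini   # dump the active layout in xrdp format
        sudo mv /tmp/km-0809.ini /etc/xrdp/km-0809.ini
        sudo service xrdp restart         # pick up the new keymap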

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject, and there is very little information available on it online, so when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and developers face in their careers, related to planned and unplanned Availability Group failovers. Linchpin People are database coaches and wellness experts for a data-driven world. Read the experience of John in his own words. Whenever a disaster occurs, it will be a stressful scenario, regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology, or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for creating a planned failover with availability groups. To make today's session simple, we're going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover. We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica Availability Group where each replica is in another location; therefore, we will be covering asynchronous commit mode (no automatic failover). The following is a breakdown of the availability group used today. Seeing the following screen might be scary the first time you come across an unplanned failover. It looks like the test database used in this Availability Group is not functional, and it currently isn't. The database status is "not synchronizing", which makes sense because the primary replica went down, so it couldn't synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica. To start, we are going to right-click on the availability group that needs to be restarted and select Failover. This will bring up the following wizard, which will walk you through the several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in asynchronous commit mode and cannot guarantee "no data loss" when we fail over. Just in case you missed it, you get another screen warning you about potential data loss because we are in asynchronous mode. Next, we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas, so this is trivial. In order to fail over, it's required to connect to the replica that will become primary. The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action will cause data loss, as we're using asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed, you will see the following screen. If you followed along this far, you might be wondering what T-SQL scripts are generated by clicking through all the sections of the wizard. If you have used Database Mirroring in the past, you might be surprised.
    It's not too different, which makes sense, because the data is being replicated via SQL Server endpoints just like good old database mirroring. Now we're going to take a look at how to do a failover with just T-SQL. First, we're going to need to open a new query window and run our query in SQLCMD mode; just in case you haven't used SQLCMD mode before, we show how to enable it below. Now you can run the following statement. Notice that we connect to the replica we want to become primary after failover and specify that the failover is forced, allowing data loss. We can use the same script to fail back when our primary instance comes back online.
        -- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
        :Connect SQL2012PROD1
        ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
        GO
    Are your servers running at optimal speed, or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Browser sends HTTP request with Range

    - by nute
    I have a local testing environment in a Fedora virtual machine. Strangely, resources (CSS and JS files) don't seem to work. Looking at Firebug, I see that the browser sends the HTTP request with "Range: bytes=0-". The server responds with either an empty 200 OK or an empty 206 Partial Content. Here is an example:
        Response Headers
        Date             Mon, 23 Nov 2009 23:33:26 GMT
        Server           Apache/2.2.13 (Fedora)
        Last-Modified    Thu, 19 Nov 2009 22:58:55 GMT
        Etag             "18-3aec-478c14dbee138"
        Accept-Ranges    bytes
        Content-Length   15084
        Content-Range    bytes 0-15083/15084
        Connection       close
        Content-Type     text/css

        Request Headers
        Host             fedora.test
        User-Agent       Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.5) Gecko/20091105 Fedora/3.5.5-1.fc11 Firefox/3.5.5
        Accept           text/css,*/*;q=0.1
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       300
        Connection       keep-alive
        Referer          http://fedora.test/pictures/
        Cookie           __utma=26341546.1613992749.1258504422.1258569125.1258752550.4; __utmz=26341546.1258504422.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=tqf8jfmc77qihe97rl4tmhq685
        Range            bytes=0-
        If-Range         "18-3aec-478c14dbee138"
    I don't know if the browser is sending the wrong request, or if it's the server that is doing this. Requests made to the outside (such as Google Analytics) are working fine. This is running in Fedora 11 in VirtualBox, with Apache and PHP. The files are being served through the "shared folders" feature of VirtualBox (could it be related?). The error logs show nothing helpful.
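    The shared-folders detail is very likely related: Apache's sendfile()/mmap() optimizations are known to misbehave on VirtualBox shared folders, producing empty or truncated static-file responses much like these, while dynamic and external requests work fine. The usual test is to turn both off; the conf.d path below matches a stock Fedora layout and is an assumption:

        printf 'EnableSendfile Off\nEnableMMAP Off\n' | \
            sudo tee /etc/httpd/conf.d/vbox-workaround.conf
        sudo service httpd restart

    The Range/If-Range pair itself is probably a red herring: Firefox typically sends it because an earlier response for the same resource came back short and it is trying to resume from its partial cached copy.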

    Read the article

  • Why does RoboCopy create a hidden system folder?

    - by Svish
    I thought I would try out RoboCopy for mirroring the contents of a folder to another hard drive, and it seems like it worked. But, for some reason, to see the destination folder I have to both enable "Show hidden files, folders and drives" and disable "Hide protected operating system files". Why is this? Both the source and destination folders were initially visible, normal directories. When I open up the properties of the destination folder, the Hidden attribute is even disabled. What is going on here? Is it because I ran it in an administrator command prompt? Or is it an issue with my choice of modifiers? Or does robocopy really just work this way?
        robocopy E: I:\E /COPYALL /E /R:0 /MIR /B /ETA
    Update: I tried to copy another drive to another folder, and I got the same thing happening there. But when I try to just copy a folder to a different folder, the destination folder stays normal. Could it be because I am copying a whole drive? If so, how can I prevent this from happening? Because I really do want to copy the whole drive...

    Read the article

  • VMware virtual machine network devices malfunctioning

    - by sheepz
    I'm running Ubuntu 10.04 LTS and VMware Workstation 7.0.1 build-227600. The virtual machine I'm running in VMware is a custom distribution built on Debian Linux version 3.1. I'm still pretty much a beginner with UNIX administration. After having messed around with the VM (I changed only the name of the folder in which the .vmx was situated, renamed the .vmx and the other .v* files accordingly, and updated the configuration in the .vmx file to match), the network devices on the virtual machine do not work anymore. The virtual machine is used for securely sending messages. As far as I know, a Perl script called proxy-gen-ifalias is responsible for properly setting up the two virtual network devices eth0 and eth1. The virtual machine comes with a GUI in which I have set up two Ethernet network devices, one internal, the other external. Now, after having messed around with this, the UI gives me this error message:
        perl proxy-gen-ifalias eth0 /etc/modprobe.d/alias-eth0
        /sbin/update-modules
        perl proxy-gen-ifalias eth1 /etc/modprobe.d/alias-eth1
        /sbin/update-modules
        ifdown eth0
        ifdown: interface eth0 not configured
        ifdown eth1
        ifdown: interface eth1 not configured
        perl proxy-gen-netcfg /etc/network/interfaces
        ifup eth0
        SIOCSIFADDR: No such device
        eth0: ERROR while getting interface flags: No such device
        SIOCSIFNETMASK: No such device
        eth0: ERROR while getting interface flags: No such device
        Failed to bring up eth0.
        ifconfig eth0
        eth0: error fetching interface information: Device not found
        make: *** [/etc/network/interfaces] Error 1
    Here are the contents of the two Perl files referred to in the message: paste.pocoo.org/show/2AMzAYhoCRZqlGY7wUFk/ proxy-gen-netcfg
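    "SIOCSIFADDR: No such device" means the guest kernel no longer has an interface named eth0 at all, and renaming or moving a VM is exactly the situation that triggers this: VMware offers to generate a new MAC address for the moved VM, and persistent-interface-naming rules in the guest still pin the old MAC to eth0/eth1, so the "new" card appears under another name. Two ways out, both sketches, since a Debian 3.1 guest may predate udev persistent naming:

        # in the guest: compare what the kernel sees with what's pinned
        ifconfig -a
        cat /etc/udev/rules.d/*persistent-net*.rules 2>/dev/null

        # either delete the stale pinning and reboot so it regenerates...
        rm /etc/udev/rules.d/*persistent-net*.rules
        reboot

    ...or, on the host, put the original MAC address back in the .vmx file (the ethernet0.generatedAddress line) so the guest sees the hardware it expects.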

    Read the article

  • win8: access denied to external USB disk; updating access rights fails

    - by Gerard
    I used to work with 2 laptops (Vista and Win7), my work being files on an external USB disk. My oldest laptop broke down, so I bought a new one. I had no option other than to take Win8. 1/ I suspect something changed with access rights, as my external disk suffered some "access denied" problem on Win8. I was prompted (by Win8) somehow to fix the access rights, which I tried to do, getting to Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, the USB disk somehow was not recognized anymore. 2/ Back on Win7, I was warned that my disk needed to be verified, which I did. In this process, some files were lost (most of them I could recover from the folder found00x, but I have a backup anyway). Also, I don't know why, but under Win7, all the folders showed with a lock. 3/ Then back again to Win8. Same problem: access denied to my disk, plus no way to change access rights, as it gets stuck at "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Win8 and Win7. I did 2/ and 3/ a few times. At some point, I also got an access denied in Win7. I could restore access rights on the disk for "system" (Properties - Security - Edit, for full control to the group "system" ...). But then I still get the same access rights problem on Win8, and get stuck in the process of restoring full control to the "system" and "admin" groups. Now, after trying for more than 3 days, I am losing my patience with that bloody Win8, which I did not want to buy but had no choice. I applied the available Windows updates to Win8. That does not help. Can anybody help me?

    Read the article

  • Windows 7 - "A disk read error occurred. Press Ctrl + Alt + Del to restart"

    - by Senthil
    Problem: When I switch on my PC, after the BIOS POST, a cursor blinks for about 5 seconds and then I get this error message: "A disk read error occurred. Press Ctrl + Alt + Del to restart." I am able to go into the BIOS, but the Windows loader doesn't even start. This message is shown after my motherboard logo comes and goes. Symptoms: I DID notice my system freezing for minutes at a time over the past two days. Also, in the past two days, it stopped halfway through the Windows boot process; I had to do a hard reset a couple of times to get it working. But since this morning, I only get this error message. Configuration: Operating system: Windows 7 Ultimate 32-bit only. Hard disk: 1 physical disk, 80GB SATA. Partitions: two (2), C: and D:. File system: NTFS. No drive encryption or compression is turned on. After searching the net, I have found people mentioning these possible causes: the hard disk is physically failing, a corrupt MBR, or a bad sector. I am planning to buy a new hard disk, install Windows on it, and continue. But I need data from the old hard disk. The data I want is on the D: drive, outside any Windows user folder, and is not encrypted, compressed, or protected in any way. I think if someone/something can get the disk working again and knows NTFS, the data can hopefully be read. What steps should I follow to recover files from the defective disk? Update: I bought a new disk, installed Windows on it, and added the defective one as a slave. I was then able to read the data from the defective hard disk. Though chkdsk found lots of errors, the files I wanted were not affected and I got them back :) I am not using that hard disk anymore, though it seems to be working at the moment.

    Read the article

  • Wordpress hacked. Disabled hacked site but bad traffic continues [closed]

    - by tetranz
    Possible Duplicate: My server's been hacked EMERGENCY My Ubuntu 10.04 LTS VPS has been hacked, probably via a WordPress site. I was alerted to it when I noticed the incoming traffic was unusually high. A WordPress site was littered with eval(base64_decode(...)) code in lots of files. My fault: I had some files writable by www-data which shouldn't have been. I've disabled that site (a2dissite ... and restarted Apache). This has reduced it, but I am still getting some malware-type traffic. My server runs several WordPress and Drupal sites and a home-grown PHP site. I have captured traffic with tcpdump and looked at it in Wireshark. It's reaching out to the login pages of some Joomla sites, trying multiple logins. The traffic stops when I stop Apache. If I a2dissite every site and reload (not restart) Apache, the traffic continues. At that point I have no virtual hosts running and no DocumentRoot in my apache2.conf, so I don't know how Apache is still running something. I have searched the other sites with grep for likely-looking PHP code with no success. I may have missed it, but I haven't found anything suspicious in the Apache logs. I have mod_status running. I haven't really seen anything much there, except that someone is still trying to do a POST to the theme page on the disabled WordPress site, but they now get a 404. What should I be looking for? Are there any tools or whatever which would give me more info about how Apache is generating that traffic?
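    The stop-versus-reload behaviour is a useful clue: a graceful reload lets in-flight requests finish, so a malicious long-running PHP request spinning inside an Apache worker can survive a2dissite plus reload but dies with a full stop, which matches what you describe. With ExtendedStatus On, mod_status shows which vhost and request each busy worker is serving. A few other standard checks (the grep pattern matches the injection you already found; the rest are the usual persistence spots, nothing specific to this box):

        # which process owns the outbound connections?
        sudo netstat -pantu | grep -v LISTEN

        # anything running as the web user besides Apache workers?
        ps -fu www-data

        # scheduled re-infection / persistence
        sudo ls -la /etc/cron.* /var/spool/cron/crontabs 2>/dev/null

        # obfuscated PHP in the docroots that are still enabled
        grep -rl --include='*.php' 'eval(base64_decode' /var/www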

    Read the article

  • Setting up a Pagefile and Partition in Server 2008

    - by Brett Powell
    I am setting up 18 new machines for our company, and I have instructions from my new boss on setting up a pagefile and partitions. I have looked at their existing machines to base the new setups on, but there is no consistency between any 2 machines, which has left me extremely frustrated, to say the least. My instructions are:
        1) Set a static pagefile (use the recommended value as max/min); put it on the SSD if an SSD is available.
        2) Make 3 partitions:
           C: is used for the OS and install files.
           D: is used for backups on machines with an SSD. On machines without an SSD, create a D: partition for the pagefile (2x installed RAM for partition size).
           E: must be the partition hosting user files.
    I have never messed with pagefiles before, and looking at their existing machines is offering no help. My questions are:
        1) As the machines I am setting up have no SSD (just 2 SATA drives), does it sound like the pagefile should be set up on the C: (primary) drive or the D:? The instructions are vague, so I have no idea.
        2) As C: and D: are both physical drives, does it sound like C: should be partitioned to create the E: drive, or D:?
    Thanks for any help I can get. I am extremely stressed out under a massive workload right now, and these vague instructions are quite infuriating.

    Read the article

  • Is my HDD dead forever?

    - by Roberto
    Yesterday I turned on my computer and it couldn't boot. I found out the HDD (a 320GB SATA Seagate Momentus 7200.3 notebook drive) was broken and couldn't be recognized by the BIOS. I have another drive of the same model, so I exchanged the controller boards. I found out that there is a problem on the broken drive's board, since my good hard drive didn't work with it. But the broken hard drive doesn't work with the good board either: it can be recognized, but when I insert a Windows installation DVD it says the hard drive is 0GB. I put it in a USB case and used it on another computer, but it doesn't show up in My Computer. I used a piece of recovery software called "GetDataBack for NTFS"; it recognized the hard drive, but with the wrong size (2TB). I tried to make it read the hard drive, but it got an I/O error reading a sector. It tries to read, and the hard drive spins up. So, since I'm using a good board on it, the problem seems to be internal. Is there anything someone could do to recover the files from it?
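    Given that the drive already throws I/O errors on reads, the usual next step is to stop running filesystem-level tools against it directly and instead clone whatever is still readable onto a healthy disk with GNU ddrescue, then aim the recovery software at the image. A sketch, assuming the failing drive shows up as /dev/sdb in the USB case (check with dmesg first):

        sudo apt-get install gddrescue   # Debian/Ubuntu package name

        # pass 1: copy everything readable, skipping bad areas quickly
        sudo ddrescue -n /dev/sdb disk.img rescue.log

        # pass 2: go back and retry the bad areas a few times
        sudo ddrescue -d -r3 /dev/sdb disk.img rescue.log

        # then run testdisk/photorec (or GetDataBack) against disk.img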

    Read the article

  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine access to. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk, obviously. I've tried to set all of the permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it, in order to give mt-daapd permission to simply read all of the files in that directory and its subdirectories. If I run id on each of my users, here's what's displayed:
        $ id rfkrocktk
        uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users)
        $ id mt-daapd
        uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users)
    It definitely seems that both users are a part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output:
        drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music
    If I run ls -l inside the Music directory to see its children, here's the output:
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied
        drwxr-Sr--  10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC
        drwxr-Sr--   3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls
        drwxr-Sr--   4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm
        drwxr-Sr--   3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth
        drwxr-Sr--   5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin
        ...
    From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that, for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
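    The listing itself shows the problem: the group bits on every directory are r-S, and a capital S means the setgid bit is set while group execute is missing. Without x, a group member cannot enter a directory or stat anything inside it, which is exactly why ls comes back empty. The home directory above Music needs traverse permission too. A minimal fix along these lines:

        # let group members pass through the home directory
        chmod g+x /home/rfkrocktk

        # g+rX: read everywhere, but add execute only on directories
        # (and on files that are already executable)
        chmod -R g+rX /home/rfkrocktk/Music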

    Read the article

  • Plesk + Apache + PHP (FastCGI): Constant session permissions problems, conflicts between HTTP / HTTPS

    - by Hans Engel
    I've just moved a collection of sites over to a brand-new server, running Apache 2.2.3, PHP 5.3, and Plesk 10.1.1. I am having problems with file permissions on PHP sessions, which are being stored in /var/lib/php/session. I originally set the permissions like so for this folder:
        drwxrwx--- 2 apache psacln 8192 Mar 22 23:25 session
    This worked fine for HTTP sessions. Files were being saved in that folder with these permissions:
        -rw------- 1 client1 psacln 0 Mar 22 23:24 sess_507...
        -rw------- 1 client2 psacln 0 Mar 22 23:25 sess_8o1...
    The problem, however, is that PHP scripts accessed via HTTPS do not seem to be run by the same client1 or client2 user. I deleted the files in the session directory and accessed a login page via HTTPS to see how sessions were being saved when initiated via this protocol:
        -rw------- 1 apache apache 0 Mar 22 23:25 sess_507...
    So, for whatever reason, sessions initiated by clients browsing with HTTPS were being saved by apache:apache, while sessions from HTTP clients were saved with someclient:psacln. What I'd like to ask: How can I avoid this problem with session permissions? When sessions are created via unencrypted HTTP and a client visits an HTTPS portion of the site, permission errors are shown, since apache:apache tries to access the session file created by someclient:psacln. The converse is also true. Can I change the user which runs the Apache HTTPS server, via Plesk or the command line? If not, can I have PHP sessions saved with rw-rw---- permissions, and then add apache to the psacln group? Any other suggestions on how to fix this issue?
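    The ownership split suggests the HTTPS vhost is executing PHP in-process (hence apache:apache) while the HTTP side runs it via FastCGI as the subscription user. Beyond aligning the two handlers, a common workaround is to stop sharing the session directory across users and give each domain its own save path owned by its PHP user. The paths below follow a typical Plesk 10 layout but are assumptions, and for FastCGI the setting belongs in that domain's php.ini rather than in an Apache-level directive:

        # one private session dir per domain
        mkdir -p /var/www/vhosts/example.com/tmp/sessions
        chown client1:psacln /var/www/vhosts/example.com/tmp/sessions
        chmod 700 /var/www/vhosts/example.com/tmp/sessions

        # then point that domain's PHP at it:
        #   session.save_path = /var/www/vhosts/example.com/tmp/sessions
        # and regenerate the vhost config through Plesk afterwards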

    Read the article

  • CouchDB crashes at startup when path to config file has space(s)

    - by Barry Wark
    I'm hoping to run CouchDB as a per-user Launch Agent on OS X. I'm using the couchdbx-core folder from the CouchDB Server.app as the base of my CouchDB deployment. I'd like each user to have their own couch instance (on a different port), necessitating separate config files for each instance. The logical place to put these files is in ~/Library/Application Support/ for each user. I can put the entire distribution in ~/Library/Application Support/my-app/couchdbx and put the .ini at ~/Library/Application Support/my-app/local.ini. Starting couchdb as bin/couchdb -a ../local.ini (from ~/Library/Application Support/my-app/couchdbx) works great. But I'd like to save every user the ~50MB couchdbx and install the couchdbx-core in a shared location (e.g. within my app's .app bundle). When I do this, the path to the per-user config file contains a space, and I get the following error when starting CouchDB:
        $ bin/couchdb -n -a ~/Library/Application\ Support/us.physion.ovation/default.ini
        {"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/Users/hs/prj/build-couchdb/build/etc/couchdb/default.ini","/Users/hs/prj/build-couchdb/build/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"/Users/hs/prj/build-couchdb/dependencies/couchdb/src/couchdb/couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
    Is there any way to provide a config file at the command line if that config file's path includes space(s)? Despite my best efforts in the mailing-list archives, wiki, and Google, I haven't been able to find a solution or a definitive "it can't work". Any help greatly appreciated.
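    Notably, the enoent error lists only the built-in default.ini/local.ini paths, so the -a argument apparently never survives the couchdb wrapper script's word-splitting. Until that is fixed upstream, a space-free symlink sidesteps the problem entirely; a minimal sketch:

        # space-free alias for the per-user config location
        ln -s "$HOME/Library/Application Support/us.physion.ovation" "$HOME/.ovation"

        # the wrapper now only ever sees a path without spaces
        bin/couchdb -n -a "$HOME/.ovation/default.ini"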

    Read the article

  • VirtualBox snapshot size

    - by intuited
    I've started using Windows 7 under VirtualBox on an Ubuntu 10.10 host. I took about 6 snapshots over the course of setting up the VM from the Windows restore image that came with the computer. My installations were more or less limited to Windows updates, antivirus, and the VirtualBox Guest Additions; I uninstalled much more than I installed. The VM was running for about 24 hours total. The snapshots increased in size at a worrisome rate, even when the machine was idle: the snapshot .vdi file for the period between 11:22 PM and 9:02 AM is 6 GB in size, and during that time very little happened. The other .vdi files are between 0.5 and 3 GB, most between 1 and 2 GB. The corresponding .sav files are between 0.5 and 1 GB. The Internet connection where I was doing this is limited to 30KB/s download, which, constantly saturated, works out to less than 3 GB per 24-hour period. Is this normal? Is there something that can be done to make snapshots more practical? Update: On starting up the VM again, I've noticed that mscorsvw is using significant processing time. Apparently this process precompiles .NET assemblies. This may have been going on during the period when I was taking snapshots, which might explain some of the snapshot size increase. I would be somewhat surprised to learn that this could be responsible for over 10 GB of additional disk usage, or that it would run for roughly 24 hours. Is this possible?

    Read the article

  • How can I explain to dspam that the user "brandon" is the same as "brandon@mydomain"

    - by Brandon Craig Rhodes
    I am using dspam for spam filtering by running the "dspamd" daemon under Ubuntu 9.10 and then setting up a Postfix rule that says:
        smtpd_recipient_restrictions = ... check_client_access pcre:/etc/postfix/dspam_everything ...
    where that PCRE map looks like this:
        /./ FILTER lmtp:[127.0.0.1]:11124
    This works well, and means that all users on my system get all of their email, whether "dspam" thinks it is innocent or not, and have the option of filtering on its decisions or ignoring them. The problem comes when I want to train dspam using my email archives. After reading about the "dspam" command, I tried this on the files in my Inbox and spam boxes (which date from when I was using another filtering solution):
        for file in Mail/Inbox/*; do cat $file | dspam --class=innocent --source=corpus; done
        for file in Mail/spam/*; do cat $file | dspam --class=spam --source=corpus; done
    The symptom I noticed after doing all of this was that dspam was horrible at classifying spam: it couldn't find any! The problem, when I tracked it down, was that I was training the user "brandon" with the above commands, but the incoming email was instead compared against the username "brandon@mydomain", so it was running against a completely empty training database! So, what can I do to make the above commands actually train my fully-qualified email address rather than my bare username? I would like to avoid having to run "dspam" as root with a "--user" option. I would have expected that the "dspam" configuration files would have had an "append_domain" attribute or something with which to decorate local usernames with an appropriate email domain, but I can't find any such thing. When I used to use the Berkeley DB backend to "dspam", I solved this problem by creating a symlink from one of the databases to the other. :-) But that solution eventually died because the BDB backend is not thread-safe, so now I have moved to the PostgreSQL back-end and need a way to solve the problem there. And, no, the table where it keeps usernames has a UNIQUE constraint that prevents me from listing both usernames as mapping to the same ID. :-)
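    For what it's worth, --user does not strictly require root: dspam honors it for any account listed via the Trust directive in dspam.conf, so adding your shell user there (an assumption about your build's configuration) lets the training loop name the same identity the LMTP delivery path uses:

        # in dspam.conf (path varies by package), add:  Trust brandon

        for file in Mail/Inbox/*; do
            dspam --user brandon@mydomain --class=innocent --source=corpus < "$file"
        done
        for file in Mail/spam/*; do
            dspam --user brandon@mydomain --class=spam --source=corpus < "$file"
        done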

    Read the article
