Search Results

Search found 20388 results on 816 pages for 'nvidia current'.


  • Make backups of Dropbox folder every week

    - by ilansch
    I have a Dropbox folder which is shared by a couple of users. I would like to make a backup of this folder every week and store the backup on another hard drive. I could simply copy the entire folder each time and call that the backup, but I would like to copy only the files that have been changed or created during that week. I thought of creating a batch script that checks each file in the Dropbox folder recursively and looks at its modified date. If that date is later than a given one (the date of the current backup), it copies the file to a folder named BackUP[Date]. Do you think this solution is OK?
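
    A minimal sketch of the script described above, written in Python 3 for readability; the source and destination paths and the last-backup date are placeholder assumptions:

        import os
        import shutil
        from datetime import datetime

        SOURCE = r"C:\Users\me\Dropbox"      # placeholder source folder
        DEST_ROOT = r"D:\Backups"            # placeholder backup drive
        last_backup = datetime(2012, 6, 1)   # date of the previous backup run

        dest = os.path.join(DEST_ROOT, "BackUP" + datetime.now().strftime("%Y-%m-%d"))

        for root, _dirs, files in os.walk(SOURCE):
            for name in files:
                src = os.path.join(root, name)
                # copy only files modified since the last backup
                if datetime.fromtimestamp(os.path.getmtime(src)) > last_backup:
                    target = os.path.join(dest, os.path.relpath(src, SOURCE))
                    os.makedirs(os.path.dirname(target), exist_ok=True)
                    shutil.copy2(src, target)  # copy2 preserves timestamps

    On Windows, robocopy with its /MAXAGE switch (or rsync on Linux) achieves much the same effect without custom code.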

    Read the article

  • LVM mirroring vs. RAID1

    - by syrenity
    Having learned a bit about LVM mirroring, I thought about replacing my current RAID-1 scheme with it to gain some flexibility. The problem is that, according to what I found on the Internet, LVM is: 1) Slower than RAID-1, at least for reads (only a single volume is used for reading). 2) Unreliable across power interruptions, and requires disabling the disk cache to prevent data loss. http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/ It also seems, at least according to several setup guides I read (http://www.tcpdump.com/kb/os/linux/lvm-mirroring/intro.html), that one actually needs a third disk for storing the LVM log. This makes the setup completely unusable on two-disk installations, and reduces the number of usable mirror disks in larger arrays. Can anyone comment on the above, and share their experience with LVM mirroring? Thanks.
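
    On the third-disk point: depending on the LVM2 version, the mirror log can be kept in memory instead of on a separate device, at the cost of a full resync after every reboot. A hedged example (the volume group, size, and name are made up):

        lvcreate -L 100G -m 1 --mirrorlog core -n lv_data vg0

    That avoids the separate log disk, though the resync-on-boot penalty is exactly the kind of trade-off worth measuring before trusting it.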

    Read the article

  • Build of expect v5.43 fails with Tcl v8.5.8

    - by E Brown
    I'm trying to build "expect" from source (v5.43), using Tcl built from source (v8.5.8), on Red Hat Linux. Tcl built fine, but my attempt to build expect fails. I run configure, then make, which gives me the error `TCL_REG_BOSONLY' undeclared when compiling exp_inter.c. I did some digging around and found the TCL_REG_BOSONLY value defined in the Tcl header tclInt.h, but exp_inter.c has no #include for that file. My question is: can "expect" be built from source with Tcl version 8.5.8, or does it require an earlier version? Version 5.43 is the latest for "expect" that I can find, and the current Tcl version is 8.5.8, but something doesn't seem compatible between the two. Any help appreciated.

    Read the article

  • Best method(s) to backup VMs running on HyperV?

    - by Kara Marfia
    We're in the middle of P2V'ing most of the network, so the current backup method is likely the worst possible: the backup agent is still installed on the guest OSes, and the backup device is dutifully pulling them onto tape, one file at a time. I suspect there's a clever way to script (PowerShell?) a suspend of the VMs, then a backup of the .vhd files, then a resume of the VMs. This seems like it would provide big speed benefits, while losing file-level restore (which might be acceptable for things like DCs and app servers). What methods/policies have you hammered out?

    Read the article

  • The cache fills up very quickly

    - by CompilingCyborg
    Memory on my Linux Mint 9 (Isadora) system fills up with cache quite quickly. I used Ubuntu and Debian before, and neither had this issue at all. At the moment I find myself frequently typing the following command to empty the cache: echo 3 > /proc/sys/vm/drop_caches. Is there any way around this, or do you know what's going wrong? I am only programming on this machine; no graphics, no games, nothing. Thanks in advance for your help!

    Read the article

  • In VirtualBox, I can't access the DVD drive to install a guest OS

    - by user211062
    I have installed a fresh copy of Ubuntu Server 12.04 and VirtualBox 4.3. I have set up a VM called "MediaServer" and tried to start it. I then get the following error: Cannot open host device '/dev/sr0' for readonly access. Check the permissions of that device ('/bin/ls -l /dev/sr0'): Most probably you need to be member of the device group. Make sure that you logout/login after changing the group settings of the current user (VERR_ACCESS_DENIED) I have looked all over the Internet and have been unable to find a solution. Using Webmin, I tried changing the group settings so that my user name was in the "vboxusers" group, but that did not work either. I tried various other changes in group settings and none of them worked. Also, I tried rebooting the server after the changes and that didn't work either. I have been following a guide on how to set up an Ubuntu server from the website "linuxhomeserverguide.com" and when it came to the section where you could finally set up your first virtual machine, I am stumped. I would really appreciate it if someone could help me. Thanks in advance.
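
    The error text itself points at the usual fix: the account running VirtualBox has to be a member of whatever group owns /dev/sr0 (commonly cdrom), not just of vboxusers. A hedged sketch, with youruser as a placeholder:

        ls -l /dev/sr0                     # note the group owner, often "cdrom"
        sudo usermod -aG cdrom youruser    # add the user to that group
        # log out and back in (or reboot) so the new membership takes effect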

    Read the article

  • How should I capture Linux kernel panic stack traces?

    - by Alnitak
    What's the current best practice for capturing full kernel stack traces on a Linux system (RHEL 5.x, kernel 2.6.18) that occasionally panics in a device driver? I'm used to the "old" SunOS way of doing things: crash dumps get written to swap, and on reboot the dump gets retrieved into the local file system. man 8 crash refers to diskdump, but that appears to be unsupported and/or deprecated. I've played with kdump, but it's unclear whether I can get a stack trace from it; triggering a panic via Magic SysRq didn't create one. It also seems wasteful to reserve so much memory (128MB) just for a kexec crash recovery kernel.

    Read the article

  • Caching with in-memory dictionaries. Are we doing it all wrong?

    - by user73983
    This approach is pretty much the accepted way to do anything in our company. A simple example: when a piece of data for a customer is requested from a service, we fetch all the data for that customer (the part relevant to the service) and save it in an in-memory dictionary, then serve it from there on subsequent requests (we run singleton services). Any update goes to the DB, then updates the in-memory dictionary. It all seems simple and harmless, but as we implement more complicated business rules the cache gets out of sync and we have to deal with hard-to-find bugs. Sometimes we defer writing to the database, keeping new data in the cache until then. There are cases where we store millions of rows in memory because the table has many relations to other tables and we need to show aggregate data quickly. All this cache handling is a big part of our codebase, and I sense this is not the right way to do it. All of this juggling adds too much noise to the code and makes it hard to understand the actual business logic. However, I don't think we can serve data in a reasonable amount of time if we have to hit the database on every request. I am unhappy about the current situation but I don't have a better alternative. My only idea would be to use the NHibernate second-level cache, but I have nearly no experience with it. I know many companies use Redis or memcached heavily to gain performance, but I have no idea how I would integrate them into our system. I also don't know if they can perform better than in-memory data structures and queries. Are there any alternative approaches that I should look into?
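
    For reference, a minimal sketch of the read-through/write-through pattern described above, in Python for brevity (the services in question are presumably .NET, and the load/save callables are placeholders):

        import threading

        class CustomerCache:
            """Read-through, write-through in-memory cache (illustrative sketch)."""

            def __init__(self, load_from_db, save_to_db):
                self._load = load_from_db   # callable: customer_id -> data
                self._save = save_to_db     # callable: (customer_id, data) -> None
                self._data = {}
                self._lock = threading.Lock()

            def get(self, customer_id):
                with self._lock:
                    if customer_id not in self._data:
                        # cache miss: hit the database once, then serve from memory
                        self._data[customer_id] = self._load(customer_id)
                    return self._data[customer_id]

            def update(self, customer_id, data):
                with self._lock:
                    self._save(customer_id, data)    # write-through: DB first...
                    self._data[customer_id] = data   # ...then the in-memory copy

    The sketch only stays consistent because every read and write goes through one object in one process; the bugs described above appear as soon as deferred writes, cross-entity invalidation, or multiple processes enter the picture, which is precisely the part Redis or memcached move out of the application.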

    Read the article

  • Linux kernel option to limit a SATA disk to UDMA/133 and 1.5Gbps

    - by John Doe
    I am trying to speed up the boot time of my Linux server, which uses removable HDD racks. The current boot time is around 2 minutes, but if I connect the HDDs directly to the mainboard it is about 2 seconds. The problem is that the kernel's AHCI implementation hits a timeout of around 30 seconds for each disk during boot, which originates from the HDD rack. After the timeout, the kernel prints that the disk is limited to 1.5Gbps and that UDMA/133 is used. So my question is: how can I set this in GRUB as a boot option, so the kernel doesn't have to wait for a timeout and instead hard-codes the speed limit for the disks? I read about a few options like pci=nomsi, which don't work; that's why I'm asking specifically about limiting the disks at boot. Thanks.
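
    The libata driver accepts boot-time speed limits on reasonably recent kernels, which looks like what is being asked for here. A hedged example for the kernel line in GRUB; verify the exact syntax against your kernel's Documentation/kernel-parameters.txt:

        # appended to the kernel command line, e.g. in /boot/grub/menu.lst
        libata.force=1.5Gbps,udma133
        # or per port, e.g. only ATA port 1:
        libata.force=1:1.5Gbps,1:udma133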

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following use case? I have a collection containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country, and state of the sale point; and so on). To think in a de-normalized way, I don't put identifiers of these related items in my main orders collection. Instead, I repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming that premise, I'm committing to maintaining all the data related to an order with few updates (because if I modify the buyer's name, I'll have to iterate through all orders, updating the ones made by the same buyer; and as MongoDB blocks at a document level on updates, I would be blocking the entire order at the moment of the update). Will I have to replicate all the products' related data (i.e. category, maker, and optional attributes like color, size…)? What if a new feature is requested and I have to write a lot of queries with the products "as the entry point of the query" (i.e. reports showing the products' sales performance grouped by region, country, or whatever)? Is it fair enough to apply the $unwind operation to my original orders collection (and what about the performance)? Should I create another collection with these queries in mind and replicate all the products' information (and their orders) again? Wouldn't it be better to store a product_id in the original orders collection so as to be more tolerant of requirement changes (and what about emulating JOINs)? Would the optimal approach be a mixed solution with an RDBMS like MySQL to retrieve the complete data? I mean: store product, user, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds, in which I would perform SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query ).
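
    On the $unwind question, a hedged sketch of a product-centric report against the de-normalized orders collection, shown with PyMongo for brevity; the database and field names are invented, and the aggregation framework requires MongoDB 2.2+:

        from pymongo import MongoClient

        orders = MongoClient().shop.orders  # hypothetical database/collection

        pipeline = [
            {"$unwind": "$products"},  # one document per order line
            {"$group": {
                "_id": {"product": "$products.name",       # invented field names
                        "country": "$sale_point.country"},
                "units_sold": {"$sum": "$products.quantity"},
            }},
            {"$sort": {"units_sold": -1}},
        ]

        for row in orders.aggregate(pipeline):  # PyMongo 3+ returns a cursor
            print(row)

    A pipeline like this scans every order, so whether its performance is acceptable is exactly the open question; the alternative in the post, keeping product_id references, trades that cost for application-side joins.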

    Read the article

  • Pros and cons of PHP vs. C/C++ as the language in a programming interview?

    - by DhruvPathak
    Though this is a matter of personal choice and comfort, I would like your views on a situation like this. Programmer A has been working in PHP for some years, and had prior experience with C/C++ from algorithm courses at university. His current fluency in PHP is good, but the C/C++ could be brushed up. So, for interviews with major companies that put a lot of emphasis on algorithms and data structures (e.g. binary trees, linked lists, arrays, strings), what should programmer A do? Try to implement those things in PHP (which is generally more suited to web development than to programming contests/interviews), or brush up the C/C++ skills and keep them as the primary tool for tackling interview questions? What are the advantages/disadvantages of each language in an environment like a programming contest or an interview? Why would you recommend, or not recommend, that programmer A participate in a contest like Google Code Jam or the ACM ICPC using PHP instead of C++ (assuming PHP is allowed as a language there)?

    Read the article

  • How to enable Jetty to support cometd/reverse Ajax while letting it listen on port 80?

    - by janetsmith
    I would like to use the cometd / reverse Ajax capability of Jetty 7. I tried to configure it to listen on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html , Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of Apache's mod_proxy do not support persistent connections. As far as I know, reverse Ajax in Jetty depends on continuations (I guess that means persistent connections). So how can I get Jetty to support reverse Ajax while coexisting with the Apache server? Thanks.

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move .bash_history to a different location. Because I run a VPS, the home directories of my users are in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and should be). Because of this, I have already moved the AuthorizedKeysFile for SSH connections away from the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate the keys. For the same reason, I want to move the .bash_history file to /home/%u/.bash_history, where %u is the current user.
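
    For reference, bash decides where history is written from the HISTFILE variable, so one hedged approach is to set it in a system-wide profile script (the filename is an assumption; the target path is the one from the question):

        # e.g. /etc/profile.d/histfile.sh, sourced by login shells on CentOS
        export HISTFILE="/home/$USER/.bash_history"

    Note that bash only writes the file when the shell exits, and users can override the variable themselves, so this relocates history rather than protecting it.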

    Read the article

  • How can I anonymize my browser useragent, yet still be counted as a FF/Ubuntu user?

    - by Rory
    I read about EFF's Panopticlick project, which shows how unique your web browser's headers are. I would like to anonymize mine a bit. My current User Agent is Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.7) Gecko/20100106 Ubuntu/9.10 (karmic) Firefox/3.5.7. I would like to make that more anonymous; however, I still want to be counted as a Firefox and Ubuntu user. How can I change my User Agent in Firefox? What should I change it to so that it's less unique, but will still be counted as a Firefox user and an Ubuntu user by web analytics software? I know there is no guarantee that I will be counted as a Firefox/Ubuntu user; something that works most of the time would be sufficient.
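
    In Firefox the string can be overridden with the general.useragent.override preference, set via about:config or a user.js file in the profile directory. A hedged example that keeps the Firefox and Ubuntu tokens most analytics packages key on while dropping some of the more identifying detail (the exact string is an assumption):

        // user.js in the Firefox profile directory
        user_pref("general.useragent.override",
            "Mozilla/5.0 (X11; Linux i686; rv:1.9.1.7) Gecko/20100106 Ubuntu/9.10 Firefox/3.5.7");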

    Read the article

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams each managed a certain area; for example, there would be a small team for every one of these functions: the UI, framework code, business/application logic, and the database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs do it all. Pros: 1. Developers may be more well-rounded. 2. Developers know more of the system. Cons: 1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in each of them. 2. It can take longer to do something you are unfamiliar with (jack of all trades, master of none).

    Devs specialize. Pros: 1. Developers can create policies and procedures for their area of expertise and more easily enforce them. 2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be. 3. Other developers don't cross boundaries and degrade another area. Cons: 1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is and that we should all do everything, but I'm somewhat of a fan of having areas of expertise. Without that expertise I've seen code degrade, database schemas become difficult to manage, UI code turn into hacks, and so on. Let's face it: some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

    Read the article

  • Online File Sharing that acts just like LAN shared drives, etc.

    - by Dayton Brown
    I have a small business client that wants to move their current file share to the web. The specs are as follows: 20 to 30 GB of space; normal file sizes (nothing more than 50 to 100 MB); 3 users; and, ideally, the exact same functionality as Windows Explorer. It should be cheap, but not super cheap; I would like to keep it around $20 per user per month. I've explored a bunch of solutions, but they are all a bit on the complicated side. Thanks in advance for the recommendations.

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non-hobby server - what should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it has only been hosting my own websites, which aren't super important. However, I will soon be setting up a Linux web server and a Linux database server at work, and I'm wondering what important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for each of my servers, for backups? If I do, how many backups should I keep and how often should those backups run? Right now, on my current server, I run a nightly cron job to back up my MySQL databases (usually 40 MB files once compressed), and bi-weekly cron jobs to back up my web root. I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or nginx to increase performance, or will Apache be fine?
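
    For comparison, the nightly/weekly scheme described above usually amounts to two crontab entries; a hedged sketch (the paths and times are placeholders, and MySQL credentials are assumed to come from ~/.my.cnf):

        # crontab -e; note that % must be escaped as \% inside crontab lines
        30 2 * * *  mysqldump --all-databases | gzip > /backup/mysql-$(date +\%F).sql.gz
        0  3 * * 0  tar czf /backup/webroot-$(date +\%F).tar.gz /var/www/html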

    Read the article

  • What's the easiest way to allow remote Exchange 2003 users (no Outlook client) to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003, with no quota settings, to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean out their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for those users to check their mail store size. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size, which is cumbersome and time-consuming compared to a way for them to check it themselves.

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value; that would give each file daemon a unique password, and the password could be written to both the director configuration and the file server's. I definitely don't want to use one universal password, as that would permit anybody who compromises one machine to get access to any machine through Bacula. Is there another way to do this, other than using a hash function to generate the passwords? Clarification: This is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client/server files. Example snippet: Director { # define myself Name = <%= hostname %>-dir QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 3 Password = "<%= somePasswordFunction %>" # Console password Messages = Daemon }
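
    For illustration, the hostname-hashing idea sketched as standalone Python; an HMAC with a site secret is safer than a bare hash of hostname-plus-secret, since one leaked password doesn't help recover the secret. In Puppet this logic would live in a custom function called from the ERB template, and the secret shown is an invented placeholder:

        import hashlib
        import hmac

        SECRET = b"site-wide secret kept outside version control"  # assumption

        def bacula_password(hostname: str) -> str:
            """Derive a stable per-host password from the hostname and a secret."""
            return hmac.new(SECRET, hostname.encode(), hashlib.sha256).hexdigest()

        # run with the same secret on both sides, the director config and each
        # file daemon's config get matching passwords, e.g.:
        print(bacula_password("fileserver01.example.com"))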

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination, with 2 workers on Ubuntu and the latest stable nginx. Network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering whether this system is in good health right now. For example: what is the queue length (I know nginx can handle a lot of concurrent requests, but I mean how many requests have to wait before being served), and what is the average queue time for a given request? I want to know because, if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status: Active connections: 4076 server accepts handled requests 90664283 90664283 104117012 Reading: 525 Writing: 81 Waiting: 3470
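
    nginx does not expose a queue length directly (connections waiting to be accepted sit in the kernel's listen backlog), but per-request service time can be logged with the built-in $request_time variable. A hedged config fragment, with the format name and log path as placeholders:

        # nginx.conf: record how long each request took, in seconds
        log_format timed '$remote_addr "$request" $status $request_time';
        access_log /var/log/nginx/access_timed.log timed;

    If the box really is CPU-bound on SSL, $request_time should climb while network throughput stays flat, which is a reasonable trigger for moving to a faster instance.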

    Read the article

  • Dump nginx config from running process?

    - by Sergio Tulentsev
    Apparently I shouldn't have spent a sleepless night trying to debug an application. I wanted to restart my nginx and discovered that its config file is empty. I don't remember truncating it, but fat fingers and reduced attention probably played their part. I don't have a backup of that config file; I know I should have made one. Luckily for me, the nginx daemon is still running. Is there a way to dump its configuration to a config file that it will understand later?

    Read the article

  • How can I prevent users from installing software?

    - by Cypher
    Our organization is a bit different from most. During certain times of the year we grow to thousands of employees, and during off-times, fewer than a hundred. Over the course of a few years, many thousands of people have come and gone in our offices, and they have left their legacy behind in the form of all sorts of unwanted, unapproved (and sometimes unlicensed) software installed on our desktops. We are currently installing redundant domain controllers and upgrading the current servers, all running Windows Server 2008 Enterprise, and will eventually be able to run a pure 2008 DC network. With that in mind, what are our options for locking down users so that they cannot install unauthorized software on systems without the assistance (or authorization) of the IT group? We need to support approximately 400 desktops, so automation is key. I've taken note of the Software Restriction Policies we can implement via Group Policy, but those imply that we already know what users will be installing and attempting to run, which is not quite so elegant. Any ideas?

    Read the article

  • Upstart: accept user input to switch xorg.conf

    - by Utaal
    I'm trying to get a startup script that requires user input to run before gdm starts (the script should let me choose, from a list of xorg.conf files, the one I'd like to use for the current session). I already tried creating a pregdm.conf in /etc/init, containing: start on filesystem stop on runlevels [06] # ... console output script # script that uses read to gather user input and replaces xorg.conf with the selected one end script and changing start on in /etc/init/gdm.conf to: start on (filesystem and started dbus and started pregdm and (drm-device-added card0 PRIMARY_DEVICE_FOR_DISPLAY=1 or stopped udevtrigger)) Echoes are displayed on the console, but I can't make the script wait for user input: gdm is started straight away. Any pointers? Thanks a lot

    Read the article

  • Costs and profit when starting an indie company

    - by Jack
    In short, I want to start a game company. I do not have much coding experience (just a basic understanding and the ability to write basic programs), nor any graphics design experience, audio mixing experience, or whatever else is technical. However, I do have a lot of ideas, good analytical skills, and a very logical approach to life. I do not have any friends who are even remotely technical (or, for that matter, creative with regard to games). So now that we've cleared that up, my question is this: how much, minimally, would it cost me to start such a company? I know that a game can be developed in under half a year, which means the company would have to operate for half a year before any income, and that assumes the people working on the first project do their jobs well and don't leave game-breaking bugs, a bunch of minor bugs, etc. So how much would it cost me, and what would be the likely profit after half a year? I'm looking at minimal costs here, because to do this I would have to sell my current apartment and buy a new, smaller one, pay taxes, and likely move to the US/CA/UK to be closer to technically skilled people (and, of course, be able to speak the language). EDIT: I'm looking at a small project for starters, not a huge AAA title.

    Read the article

  • Wi-Fi signal while keeping the Ethernet cable

    - by daGrevis
    The situation is that I have an Ethernet cable which provides internet to my computer. What I want is to have a Wi-Fi connection in my house while keeping an Ethernet cable for my PC, as I have now. I will use the Wi-Fi for my laptop and mobile phone. I think I need a router for that, and I'm looking at the Asus RT-N16 (suggested on Coding Horror), but I am not sure. Is it the right thing for me? Will I be able to get a Wi-Fi signal and keep the Ethernet cable? I guess the setup would be that the current cable goes into the router, the router provides the Wi-Fi signal and gives back a new cable, or something like that. Thanks for any advice! And sorry if this topic isn't on the right site.

    Read the article
