Search Results

Search found 4893 results on 196 pages for 'expect'.

Page 127/196 | < Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >

  • Can I install windows on an SSD and access data from my old windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that last time I upgraded with the same Windows version). When the computer attempts to boot into Windows it briefly flashes a blue screen and then restarts. System recovery gives a message something like "BadDriver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?) and I've been so far unsuccessful in getting it to boot into my old Windows partition. SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows onto that, and then... be able to access just the contents of both of my other hard drives? I don't intend to run any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it, no OS or applications, so I'd also expect to still be able to access it. Is this possible? Can I install the SSD, format/install Windows onto it only, and still access the contents of my two transferred-over drives?

    Read the article

  • How to get desired FireFox last tab behavior?

    - by JustJeff
    All tabs should be the same; so if any of them have a 'close' button, they all should, including the last tab. I see no reason that a tab's close button should suddenly vanish simply because that tab has become the last one open. If I have N tabs open and park the mouse over the left-most tab's close button, this vanishing close button trick means I now have to make a large mouse move to get to the app's close button. Unsat. Mouse moves = too many milliseconds wasted. Closing the last tab should NOT take me to my home page, or any other page whatsoever. I want the browser to close with the last tab. I do not expect or want "new tab" behavior when I click a Close button. Now, I've gone into about:config and played with browser.tabs.closeWindowWithLastTab, but this setting oversteps its purpose; while it does make the browser close, for some inexplicable reason it also suppresses the last tab's close button! I have tried the "last tab close button" add-on, and while this does restore the close button, the add-on oversteps by taking the liberty of turning closeWindowWithLastTab off. Is there some way out of this pickle? Is it too hard to just code things to provide simple, orthogonal actions, so that everybody can config the UI to their liking, and not just to a few pre-fab configurations that the developers think everyone should like? Btw, FF 13.0.1 on MS Windows.

    Read the article

  • Nexenta, NFS and LOCK_EX

    - by Givre
    I'm currently using a LAMP architecture and I'm facing a big problem :( I have several HTTP web servers using PHP5. All of them mount the directory holding the hosted websites via NFS (v3). The file server is running the Nexenta Storage Appliance using ZFS. The problem is that every NFS client that tries to write to a file over NFS hits this, inside the apache2 process:

        open("/nfs/website1/file.txt", O_RDWR|O_CREAT, 0600) = 11647
        fstat(11647, {st_mode=S_IFREG|0600, st_size=23754, ...}) = 0
        flock(11647, LOCK_EX

    The process never gets the lock and keeps waiting for it... forever. The effect? All the apache2 processes end up stuck waiting, and my servers can't handle any other requests because there are no processes left. I don't know where to find a solution... to me it looks like it's on the NFS server side, but which configuration is wrong or missing? How can I find out what is wrong? If you need more information about the configuration, just ask me what would help you most :)
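
    As a side note, a minimal sketch (not from the question, and in Python rather than PHP) of the same flock(LOCK_EX) pattern on the NFS mount; the non-blocking variant fails fast instead of hanging a worker, which can help confirm whether the NFS lock service is answering at all:

        #!/usr/bin/env python3
        # Try to take an exclusive lock on a file that lives on the NFS mount.
        # LOCK_NB makes flock() return an error immediately instead of blocking
        # forever, so a broken lock daemon shows up right away.
        import fcntl, sys

        path = "/nfs/website1/file.txt"   # path taken from the strace above
        with open(path, "a") as f:
            try:
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                print("lock acquired")
                fcntl.flock(f, fcntl.LOCK_UN)
            except OSError as exc:
                print("could not lock %s: %s" % (path, exc))
                sys.exit(1)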

    Read the article

  • Run Python script at startup using upstart

    - by MarcusMaximus
    I'm trying to create an upstart script to run a python script on startup. In theory it looks simple enough but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec python /usr/local/scripts/script.py
        end script

    The test script I want it to run is currently a simple python script that runs without any issue when run from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "UpstartTest " , i , time.asctime() , " - Username: " , os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.

    Read the article

  • Bizarre client IP switch-up on VPN

    - by B. VB.
    Let A.B.C.D be the public IP of my VPN server. Let W.X.Y.Z be the IP of the client before it connects to the VPN. My VPN server's IP address on the LAN is 10.8.0.1, and the client is 10.8.0.6. I also run a webserver on the same machine hosting the VPN. On it is a simple webpage that performs the exact same thing as whatismyip.org (i.e., it simply prints the IP of the requester). Let me illustrate the scenario for you. In a Chrome window I have three tabs; what I have in parentheses is the URL:

        Tab 1 (http://whatismyip.org): A.B.C.D. This is what I expect to see; it's the public IP of the VPN server.
        Tab 2 (http://10.8.0.1): 10.8.0.6. OK, looks as expected; they are behind the same LAN now.
        Tab 3 (http://A.B.C.D): W.X.Y.Z. WTF??

    Basically, if I access the webserver while tunneled, it shows the IP address of my machine PRIOR to tunnelling! Remember, Tab 2 and Tab 3 are the same webpage. Why does Tab 3 not show the client IP as its own IP (i.e., show A.B.C.D)??? I hope this question is clear, thanks in advance!

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, the PID says it is using 470mb of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb. The support person says this means I have a memory leak with my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking java process in this case? If I use free, before:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0

    and free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?
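
    As a rough cross-check of the gap described above, here is a sketch (an illustration, not part of the question) that sums per-process RSS from /proc and compares it to the 'used' figure derived from /proc/meminfo. The two are not expected to match exactly: shared pages are counted once per process, and kernel allocations (slab, page tables, and so on) never appear in any process's RSS:

        #!/usr/bin/env python3
        # Sum VmRSS over all processes and compare it to "used" (excluding
        # buffers/cache) computed from /proc/meminfo. All values are in kB.
        import os

        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])   # kB
            return info

        def total_rss():
            total = 0
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/status" % pid) as f:
                        for line in f:
                            if line.startswith("VmRSS:"):
                                total += int(line.split()[1])   # kB
                except OSError:
                    pass   # process exited while we were reading
            return total

        m = meminfo()
        used = m["MemTotal"] - m["MemFree"] - m.get("Buffers", 0) - m.get("Cached", 0)
        print("sum of RSS        : %d kB" % total_rss())
        print("used (free's view): %d kB" % used)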

    Read the article

  • VMWare Server modifying files related to paused VMs, is this expected?

    - by David Spillett
    While refreshing the backup of a VM used for testing, I experienced the following warning from tar:

        tar: /VMsR0/cli_noddyco_test/VM2K8_32_web.vmem: file changed as we read it

    The VMs in question were paused at the time. My first thought was that I'd mixed up the machines and was trying to back up something that was still actively running. To be sure, I unpaused and properly shut down the VM, and the vmem files that tar reported changing vanished as I would expect. Is it normal for VMWare Server to touch or alter files for paused VMs like this, or is there likely something amiss with our setup? If this is expected behaviour, is it just touching the vmem file (altering the last modification date without actually changing the content)? If it is normal for files relating to paused VMs to be updated, I shall have to revise our backup procedures to make sure the VMs are fully shut down rather than just paused (this isn't a problem, but it seems strange and I'd prefer to understand what VMWare is doing and why instead of just dismissing it as "one of those things" and working around it). For further detail: the host in question is VMWare Server version 2.0.2 running on 64-bit Debian/Lenny, and that VM did not have any snapshots at the time. We have backed up paused VMs this way in the past with no such warnings from tar.

    Read the article

  • Apache Virtualhost entry with Windows hostname

    - by gshauger
    I have a Windows Domain Controller and we use it for DNS for our internal network. I have an Ubuntu box with an IP address of 172.16.34.149. Within the Windows DNS I created the forward and reverse lookup entries for the name Endymion. Naturally, whenever I FTP/SSH/HTTP/etc to the hostname Endymion it resolves correctly to my Ubuntu box. I wanted to do some web development on this box for an existing site. There were problems when I placed the website in a subfolder of /var/www/. Let's just say it was in folder /var/www/projectx/. The issue involved the incorrect resolution of non-relative URLs. So I figured I could create a new DNS entry for the hostname projectx. Sure enough, when I FTP/SSH/HTTP/etc to the hostname projectx it takes me to the same Ubuntu box as the hostname Endymion; this is what I would expect. I now have two hostnames for the same box. I then create a Virtualhost entry in httpd.conf that looks like the following:

        <VirtualHost *:80>
            DocumentRoot /var/www/projectx
            ServerName projectx
            ServerAlias projectx
        </VirtualHost>

    Sure enough, when I go to a browser and type in http://projectx/ it takes me to the correct subfolder. Everything works!!! Not so fast. I then go to http://endymion/ and instead of taking me to /var/www/ it takes me to /var/www/projectx/. Clearly I'm missing something. Help please! ;)
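
    One detail that may be at play here: with name-based virtual hosts, Apache hands any request whose Host header matches no ServerName or ServerAlias to the first <VirtualHost> it finds, so a lone projectx vhost also catches requests for endymion. Below is a sketch of a layout that keeps the two apart; the paths are assumed from the description above, and Apache 2.2 would additionally need a NameVirtualHost *:80 line:

        # Declared first, so it also acts as the default for unmatched hostnames
        <VirtualHost *:80>
            ServerName endymion
            DocumentRoot /var/www
        </VirtualHost>

        <VirtualHost *:80>
            ServerName projectx
            DocumentRoot /var/www/projectx
        </VirtualHost>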

    Read the article

  • Buying an old laser printer -- what will need to be replaced?

    - by marienbad
    Hi all -- as you can see I'm new. I do IT and wiring for a small local shop but I never deal with printers. I do a LOT of printing, and I'd like to stop spending as much money on it. On my local CL, there is an HP 8100DN (duplex network) printer for a very good price (and the toner is a quarter-cent per page). It has printed 200,000 pages and I don't yet know anything else about it. The model was released in 1999. So my questions: What are the parts that tend to need service on laser printers? On ebay, I see fusers, rollers, DC power boards, and motors. What would you expect to replace soon at 200,000 pages? Are there any good "tests" to find out if certain parts are near failure? Do you have anything to say about the HP 8100 specifically? The bottom line for me is that if there's any chance of repairs costing more than $100, it's not worth it for me.

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation. Use case: Bob, with UID 1001, shall connect on port 31001. Alice, with UID 1002, shall connect on port 31002. John, with UID 1003, shall connect on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the sftp server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits: now it should be easy to monitor per-user bandwidth usage. A rrdtool-based application could produce per-user charts. I know this won't be a perfect calculation of the bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem to me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions: Is the idea of assigning one port to each user a good one? Is the proposed setup a reliable one? If I have to open dozens of ports for many users, should I expect a performance drawback? Do you know a rrdtool-based application which could make the kind of chart described above?

    Read the article

  • Is there a local yubnub.org replacement?

    - by Justin Keogh
    I use yubnub very often... every Google search I do by just (in Firefox) "ctrl-t", then (now in the URL bar) "y g searchterms" [Enter]. "y" in this case is a search keyword I added by right clicking in the yubnub.org command box. It's really fast, and I just do it automatically now... but the problem is that now I am stuck with whatever the yubnub command that I am so used to using does. I can't change it... for example, what if I don't want to use Google, but I still want to use the "g" command to search? Or say I want to use Google's https search... etc. I suppose this would be kinda trivial to implement locally... but I would hate to re-invent the code if it's already done and in use... ideas? Also, a local yubnub.org replacement would save me the DNS lookup and traffic to yubnub.org. I don't expect to be able to import all commands from yubnub.org, but that would be cool if possible.
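
    In the spirit of the "implement it locally" idea above, here is a minimal sketch of a local command redirector; it is entirely an assumption (the port, the command table, and the keyword setup are made up, and it imports nothing from yubnub.org). Run it on localhost, point a browser keyword such as "y" at http://localhost:8042/?cmd=%s, and "y g searchterms" goes wherever the local table says, including an https Google search:

        #!/usr/bin/env python3
        # Tiny yubnub-style redirector: the first word of "cmd" picks a target
        # from COMMANDS, the rest becomes the search terms. Unknown commands
        # fall back to a Google search of the whole query.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs, quote_plus

        COMMANDS = {
            "g":  "https://www.google.com/search?q={}",
            "wp": "https://en.wikipedia.org/wiki/Special:Search?search={}",
        }

        class Redirector(BaseHTTPRequestHandler):
            def do_GET(self):
                query = parse_qs(urlparse(self.path).query).get("cmd", [""])[0]
                name, _, args = query.partition(" ")
                if name in COMMANDS:
                    target = COMMANDS[name].format(quote_plus(args))
                else:
                    target = COMMANDS["g"].format(quote_plus(query))
                self.send_response(302)
                self.send_header("Location", target)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("127.0.0.1", 8042), Redirector).serve_forever()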

    Read the article

  • What are possible results/side effects if replication between DC's in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers. But in dealing with real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we get the DC back up and running and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart the workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read where you can force replication, but that would mean putting it back on the network. I am afraid to put the DC back on the network for fear that something else could go wrong. So, what other issues could one expect to run into when two DCs have been unreplicated for over a month?

    Read the article

  • Running a router as a virtual machine, can it be reliable?

    - by Kr1stian
    Hi all. Does anyone here run their routing through a virtual machine, i.e. have a virtual machine set up as the main router/gateway etc.? If yes, how many clients are using this kind of setup? For those who are wondering why I'm asking this: I got an assignment for my internship to create an all-in-one "box" which would do routing and be an IP PBX at the same time (only open source solutions can be used, except RouterOS). The routing part is currently done through RouterOS, and for VoIP they want to use sipXecs. RouterOS supports virtualization through KVM, but RouterOS itself only supports 2GB of memory (and won't support more in the near future). sipXecs needs a lot more than 2GB. I told them that we could solve this problem by running RouterOS as a virtual machine on a 64-bit host OS (e.g. CentOS), with another virtual machine running sipXecs. That way we would be able to use the whole memory. But they told me that it's too risky to do something like that and that they need something with "enterprise stability/reliability". I told them that we could make a redundant image of each VM which would automatically start if one VM stops working, but I was told the same thing. So this is why I asked the questions above, to see if I really suggested something that's not good to do, or maybe this is something completely normal and it can be done with "enterprise stability/reliability" :) Thank you for your answers, Kristian

    Read the article

  • Linux Firefox copy/paste issue

    - by Daniel
    I'm using Firefox 15.0.1 on Fedora 17 without running GNOME or KDE. The problem I'm having is that whenever I select text outside of Firefox, for instance in xterm, the middle mouse button doesn't paste it inside Firefox, for instance into a text area, but rather brings up a context menu. A related problem is that whenever I middle-click inside Firefox, for instance in a text input, the middle mouse button brings up a menu when I'd like it to paste. Even if I select Paste in the menu, it isn't the text selected outside Firefox that gets pasted but the last text selected inside Firefox. In about:config I tried "middlemouse.paste true" and also "middlemouse.paste false" together with the add-on Auto Copy, but no combination worked. A middle mouse click always brings up a context menu. But Auto Copy did help with automatically copying selected text to the clipboard. With Auto Copy the only problem I still have is pasting by middle button. Follow up: somehow the problem solved itself. After removing Auto Copy, Firefox works as I expect it to (like any other X application). I can't figure out why it was not doing it before; probably I was messing too much with about:config and not restarting frequently enough.

    Read the article

  • What's the safest way to kick off a root-level process via cgi on an Apache server?

    - by MartyMacGyver
    The problem: I have a script that runs periodically via a cron job as root, but I want to give people a way to kick it off asynchronously too, via a webpage. (The script will be written to ensure it doesn't run overlapping instances or such.) I don't need the users to log in or have an account; they simply click a button and if the script is ready to be run, it'll run. The users may select arguments for the script (heavily filtered as inputs) but for simplicity we'll say they just have the button to press. As a simple test, I've created a Python script in cgi-bin. chown-ing it to root:root and then applying "chmod ug+" to it didn't have the desired results: it still thinks it has the effective group of the web server account... from what I can tell this isn't allowed. I read that wrapping it with a compiled CGI program would do the job, so I created a C wrapper that calls my script (its permissions restored to normal) and gave the executable root ownership and the setuid bit. That worked... the script ran as if root ran it. My main question is: is this normal (the need for the binary wrapper to get the job done) and is this the secure way to do this? It's not world-facing, but still, I'd like to learn best practices. More broadly, I often wonder why a compiled binary is more "trusted" than a script in practice. I'd think you'd trust a file that was human-readable over a cryptic binary. If an attacker can edit a file then you're already in trouble, more so if it's one you can't easily examine. In short, I'd expect it to be the other way 'round on that basis. Your thoughts?

    Read the article

  • Why doesn't the value in /proc/meminfo seem to map exactly to the system RAM?

    - by Eric Asberry
    The values in /proc/meminfo for MemTotal don't make sense. As a human, eyeballing it, it seems to roughly correspond to the installed RAM, but for using it to display the installed RAM from an automated utility it appears to be inexact and inconsistent. For a system with 1G of RAM, I would expect the MemTotal line to have a value of 1048576 (1024*1024). But instead, I'm seeing 1029392. On another 4G box, I'm seeing 3870172, which is not a multiple of 1024, and it's not even close to 1029392*4. On an 8G box, I get 8128204, which again seems to have no correlation to the other values, nor is it a multiple of 1024. I'm trying to use this information to report the RAM on a status web page. My work-around is to just "round" it to the nearest 1G multiple, but I'd like to understand why these values seem inconsistent and don't match my expectations. Can somebody fill me in on what I'm missing here? EDIT: To expand on the accepted answer below: the reference can be found here. Also of interest to me from that page, which explains the inconsistency, is this bit: "meminfo: Provides information about distribution and utilization of memory. This varies by architecture and compile options. ..."
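
    A small sketch (an illustration, not from the question) that prints the gap between MemTotal and the nearest whole GiB; the missing kilobytes are RAM that the kernel and firmware reserve before userspace starts, which is why MemTotal never comes out as a clean multiple of 1024:

        #!/usr/bin/env python3
        # Compare MemTotal from /proc/meminfo against the nearest whole GiB.
        with open("/proc/meminfo") as f:
            memtotal_kb = int(next(line for line in f
                                   if line.startswith("MemTotal:")).split()[1])

        gib_kb = 1024 * 1024
        nearest_gib = round(memtotal_kb / gib_kb)
        print("MemTotal         : %d kB" % memtotal_kb)
        print("Nearest GiB      : %d kB (%d GiB)" % (nearest_gib * gib_kb, nearest_gib))
        print("Reserved/missing : %d kB" % (nearest_gib * gib_kb - memtotal_kb))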

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 of them. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe that if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use:

        mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table \
            --max_allowed_packet=1000000000 database table \
            --where="domain_id=10" > domains10.sql

    Using this I expect every row with domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, whereas when I look at the db, there are many, many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using the < or > operators I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through an innodb force recovery 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
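
    A sketch of the per-domain salvage loop described above; the id range, the missing credentials, and the database/table names are placeholders rather than details from the question, and each domain_id gets its own mysqldump invocation and output file:

        #!/usr/bin/env python3
        # One mysqldump per domain_id, each redirected to its own file, so a
        # corrupted range only poisons that domain's dump instead of the lot.
        import subprocess

        DOMAIN_IDS = range(1, 6301)          # roughly 6300 ids, per the question

        for domain_id in DOMAIN_IDS:
            outfile = "domains%d.sql" % domain_id
            cmd = [
                "mysqldump", "-u", "root",
                "--skip-opt", "-q", "--no-create-info", "--skip-add-drop-table",
                "--max_allowed_packet=1000000000",
                "--where=domain_id=%d" % domain_id,
                "database", "table",
            ]
            with open(outfile, "w") as out:
                subprocess.run(cmd, stdout=out, check=False)   # keep going past failures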

    Read the article

  • MySQL : table organisation for very large sets with high update frequency

    - by Remiz
    I'm facing a dilemma in the choice of schema for my MySQL application. So before I start, here is an extremely simplified picture of my database. Schema here: http://i43.tinypic.com/2wp5lxz.png

    In one sentence: for each customer, the application harvests text data and attaches tags to each piece of data collected. As an approximation of the usage of each table, here is what I expect:

        customer: ~5000, shouldn't grow fast
        data: 5 million per customer, could double or triple for big customers
        tag: ~1000, quite fixed size
        data_tag: hundreds of millions per customer, easily; each data row can be tagged a lot

    The harvesting process is permanent, which means that around every 15 minutes new data comes in and is tagged, and that requires very constant index refreshing. A lot of my queries are a SELECT COUNT of DATA between specific DATES, tagged with a specific TAG, on a specific CUSTOMER (very rarely will it involve several customers). Here is the situation: you can imagine that with this kind of volume of data I'm facing a challenge in terms of data organization and indexing. Again, it's a very minimalistic and simplified version of my structure. My question is, is it better to stick with this model and manage crazy index optimization (which involves potentially having billions of rows in the data_tag table), or to change the schema and use one data table and one data_tag table per customer (which involves having 5000 tables in my database)? I'm running all of this on a MySQL 5.0 dedicated server (quad-core, 8GB of RAM), replicated. I only use InnoDB; I also have another server that runs Sphinx. So knowing all of this, I can't wait to hear your opinion about this. Thanks.
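
    For reference, a sketch of the count query described above; the table and column names are assumptions inferred from the simplified schema, not the application's real ones:

        -- rows of one customer's data in a date range carrying a given tag
        -- (customer_id, collected_at, data_id and tag_id are assumed names)
        SELECT COUNT(*)
        FROM data d
        JOIN data_tag dt ON dt.data_id = d.id
        WHERE d.customer_id = 42
          AND dt.tag_id = 7
          AND d.collected_at BETWEEN '2010-01-01' AND '2010-01-31';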

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all sort of seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head:

        vi
        Basic CLI stuff: move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc, etc.
        More advanced things that I take for granted but are actually pretty hard: doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'.
        Distribution-specific stuff: compiling software, rpm, yum, apt-get, you name it.

    This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is: where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. Where do you all suggest as a starting point for picking up Linux skills?

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. In my CPU usage graph the cleanup runs show up as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39 minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). I have corresponding graphs for IO time and for disk operations. At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.

    Read the article

  • How many guesses per second are possible against an encrypted disk? [closed]

    - by HappyDeveloper
    I understand that guesses per second depend on the hardware and the encryption algorithm, so I don't expect an absolute number as an answer. For example, with an average machine you can make a lot (thousands?) of guesses per second against a hash created with a single md5 round, because md5 is fast, making brute force and dictionary attacks a real danger for most passwords. But if instead you use bcrypt with enough rounds, you can slow the attack down to 1 guess per second, for example.

    1) So how does disk encryption usually work? This is how I imagine it, tell me if it is close to reality: when I enter the passphrase, it is hashed with a slow algorithm to generate a key (always the same?). Because this is slow, brute force is not a good approach to break it. Then, with the generated key, the disk is decrypted on the fly very fast, so there is no significant performance loss.

    2) How can I test this with my own machine? I want to calculate the guesses per second my machine can make.

    3) How many guesses per second are possible against an encrypted disk with the fastest PC ever so far?
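
    For question 2, a rough sketch of the kind of measurement that is possible (the iteration count is an assumption; real disk encryption such as LUKS calibrates it at format time so that one key derivation costs a fixed amount of wall-clock time, and cryptsetup also has its own "cryptsetup benchmark" subcommand):

        #!/usr/bin/env python3
        # Measure how many passphrase guesses per second this machine manages
        # against a PBKDF2 key derivation with an assumed work factor.
        import hashlib, os, time

        ITERATIONS = 100000      # assumed; pick whatever your setup uses
        SALT = os.urandom(16)
        TRIALS = 20

        start = time.perf_counter()
        for i in range(TRIALS):
            hashlib.pbkdf2_hmac("sha256", b"guess-%d" % i, SALT, ITERATIONS)
        elapsed = time.perf_counter() - start

        print("%.1f guesses/second at %d iterations" % (TRIALS / elapsed, ITERATIONS))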

    Read the article

  • how can a web page change my mouse speed?

    - by Tekaholic
    I usually have many tabs open in Firefox and I haven't been able to find one specific website that causes this, because I don't seem to notice it right away. I go to click on something on my desktop and I find myself lifting the mouse several times to get across the screen. It doesn't seem to matter what program I might be using, because this happens on all desktops and in Firefox too. So I go into my settings and turn the mouse speed all the way up, and it's still not really acceptable. It doesn't matter if I click on different tabs, but when I close the browser my mouse is way too sensitive, like I'd expect at the max setting. Then I go back to Control Center and return my mouse speed and acceleration to normal. When I restart my browser, the mouse remains normal. So is there something to this, before I start wasting my time hunting through my history to discover which website or sites are having this effect? And if it is a specific site and I locate it, what can I change to stop its effect on my mouse besides not visiting it? I am using Linux Mint 13 on a box with an AMD Athlon processor and 2 gigs of RAM. I never installed another browser because everything works for me.

    Read the article

  • CentOS: AJAX/jQuery not working

    - by Australiya
    In a nutshell, I have an unmanaged VPS. At one time it had Ubuntu 10.10 server on it, then I reinstalled it with CentOS 6 and updated it to CentOS 6.2. Now the problem is that the AJAX/jQuery shoutbox has ceased working (I assume it uses one of the two to inject itself into a div and then refresh when new messages are posted; I'm not sure, I didn't write these), and the plug-board script now shows me a lime green blank page. No changes to the source code have been made, and the scripts are in the location they expect themselves to be. I have Apache2, MySQL 5, PHP 5, and I did install the php-xml libraries. What am I missing? It's gotta be server side, because the scripts themselves are fine: if I move them to a different server they work just fine. I'm not getting any errors related to this in the error_log file. Thanks in advance! Edit: If you want, you can look at the plugboard at kazeshini.net/plugboard and there's an installation of the chatbox at silverlotus.kazeshini.net/yshout/example. I know nothing about scripts and debugging, so it's better for someone who knows what they're looking for to look at it.

    Read the article

  • Access 2010 datasheet view only/relationships unavailable

    - by Luis
    I'm relatively new to MS Access in general and just started working with Access 2010. I've created a new web database with a few tables that I need to relate.

    First problem: For the life of me, I can't view anything in any view other than datasheet view; everywhere I would expect to be able to change the view, only datasheet view is available.

    Second problem: I can't change the primary key(s). Presumably I would be able to do this if I could get out of datasheet view and into design view.

    Third problem: The 'Relationships' button is greyed out.

    I know these appear to be really simple things but I've been looking for much more time than I'd like to admit trying to figure out how to get unstuck.

    Update: It would appear that this is happening because it is a 'web database', as I've been able to do all of the above in a new regular database. With this in mind let me ask a different question: Am I able to add relationships and change primary keys in a web database? If so, how? More generally, what is the point of a web database?

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run against a copy of the database on another server we keep as a backup, but this will not last much longer and we are looking to upgrade.

    Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 gig of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: one part, side a, inserts and updates most of the data; side b reads the data and does some minor updates. Currently our biggest day for side a is 800 users (of 40,000 users all year) putting data into the system, and our side b load is currently unknown, though we have a total of 1000 clients. The system is most likely going to cap out at 5000 side b clients, with about 300,000 side a users a year. The current database is 5 years old, so we can expect the database to grow pretty rapidly, possibly doubling each year (and we can most likely archive older records if it comes to that).

    So with that being said, should we get a server for each side of the app, side a being the master and side b being the slave, with any updates made on side b routed to side a? The question is, should I get two of these or one:

        2 x Intel Nehalem Xeon E5520 2.26GHz (8 cores)
        12GB DDRIII Memory
        500GB SATAII HDD
        100Mbps Port Speed

    And naturally I would need to have a redundant backup, so it could potentially be four of them.

    Read the article
