Search Results

Search found 16987 results on 680 pages for 'second'.


  • What is the difference between disabling hibernation and idling time for a NAS?

    - by Gary M. Mugford
    I have two D-LINK DNS-323 NAS boxes with two Seagate drives in each. The first one is about a year old, the second one about three months. The first two on Monster are each 1.5T drives while the last two on Origami are 2T drives. I have never been overly happy with the Monster drives but, outside of poor throughput on small files, they have been consistently available to all programs after I put a batch file into my startup to do a directory listing of each. I added the two new drives when I added the Origami box. But, watching the DOS box that comes up, I rarely see both listed before the box disappears. Other programs, backups, Belarc, even my file browsers, seem to have a dickens of a time seeing O: and P:. Finally, I decided to go into setup and turn off hibernation. Performance HAS been better since, and Belarc, for instance, now sees both drives. At the time of poking around, I noticed an Idle Time feature too. What is the difference between the two settings? And for added points, how much trouble am I in for turning off hibernation? The super bonus round ... anything ELSE I should have done? Thanks in advance, GM

    Read the article

  • Integrating Nagios with a ticketing system/incident management system

    - by sektor
    Is there a free ticketing system/incident management system which will help me in achieving the following? 1) If a service goes down then Nagios alerts the on-duty staff and pushes the status to some backend or DB as a ticket, say the initial status is "New". 2) The on-duty staff logs in through a frontend and acknowledges the new ticket by marking it as "In progress", so now the status of the ticket changes from "New" to "In progress". 3) If even after "n" minutes no person from the on-duty staff has changed the ticket status to "In progress", then Nagios alerts the next level of contacts; if the on-duty staff has acknowledged the ticket, then there is no need to alert the next level. 4) When the service comes up, Nagios closes the ticket by marking it "Closed". Now, I already have Nagios monitoring set up, and currently it alerts by sending text messages and mails; what I'm looking for is some framework which only escalates the issue (alerts the second level) if the first level (on-duty staff) fails to respond to the initial alert. By "responding to the alert" I mean the on-duty staff can log in via some frontend and basically change the status to something like "Acknowledged" or "In progress".
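
    For what it is worth, a minimal sketch of the Nagios side of such an integration: a custom notification command that pushes the alert into a ticketing backend. The ticket-api.example.com endpoint and its fields are hypothetical placeholders, not the API of any particular ticketing product.

        #!/bin/bash
        # notify-ticket.sh -- invoked by Nagios as a service notification command.
        # Nagios passes its standard macros on the command line; the HTTP endpoint
        # below is a made-up placeholder for whatever ticketing backend is chosen.
        HOST="$1"        # $HOSTNAME$
        SERVICE="$2"     # $SERVICEDESC$
        STATE="$3"       # $SERVICESTATE$ (CRITICAL, WARNING, OK, ...)
        OUTPUT="$4"      # $SERVICEOUTPUT$

        case "$STATE" in
          CRITICAL|WARNING)
            # open a ticket with status "New"
            curl -s -X POST "https://ticket-api.example.com/tickets" \
                 -d "host=$HOST" -d "service=$SERVICE" -d "status=New" -d "detail=$OUTPUT"
            ;;
          OK)
            # close the matching ticket when the service recovers
            curl -s -X POST "https://ticket-api.example.com/tickets/close" \
                 -d "host=$HOST" -d "service=$SERVICE"
            ;;
        esac

    The "only escalate if nobody responded" part maps naturally onto Nagios service escalations combined with acknowledgements: the frontend that flips the ticket to "In progress" would also acknowledge the problem in Nagios, which suppresses the next escalation level.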

    Read the article

  • Linux And NTFS Permissions

    - by VGE IT
    Trying to restrict a folder within a directory created on a Linux filesystem. I have changed the permissions to: root rwx, a special Active Directory group rwx, and all others r. Even so, people that are not in the special AD group can access the directory and modify files. When a user modifies documents within the directory, the group changes to "Domain Users", and I have to manually change the documents' default group back to my AD group. I have tried to create another AD group and modify permissions to deny write access. When doing so through Windows Explorer, the settings seem to take effect until I go back in and look at the permissions for the restricted group. No permissions show when I view them the second time. Please assist. Samba share properties:

        [MyShare]
        comment = "blah blah blah"
        browseable = yes
        guest ok = no
        read only = no
        path = /xxx/xxxxx/
        create mask = 0640
        directory mask = 0750
        admin users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        valid users = @"domain\Domain Admins", @"domain\group A", @"domain\group B"
        nt acl support = Yes
        inherit acls = yes
        inherit owner = yes
        inherit permissions = yes
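
    Independent of the Samba-side masks, one common way to keep new files in such a share owned by the intended group is to set the setgid bit on the directory, so files created inside it inherit the directory's group instead of the user's primary group ("Domain Users"). A minimal sketch, assuming the AD group is visible to Linux via winbind; the path and group name are placeholders based on the question:

        # give the directory to the AD group and set the setgid bit;
        # new files then inherit "group A" rather than "Domain Users"
        chgrp "DOMAIN\group A" /xxx/xxxxx/restricted
        chmod 2770 /xxx/xxxxx/restricted    # rwxrws---; adjust the mode to match your policy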

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention that there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (the same file, really, with an alternate name) opens up fine. That is something that, if I could fix it, would also help streamline the process.
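
    On the "don't download the standard images" part: wget has a --reject (-R) option that takes a comma-separated list of file name suffixes or patterns to skip, which should keep it from fetching those known images at all. A sketch of roughly how the first command could look (list abbreviated; note that some wget versions still fetch and then delete rejected page requisites, so it is worth testing):

        wget -p -nd -A jpg,html -k \
             --reject "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/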

    Read the article

  • I want to add a Quality Assurance domain. How do I handle DNS servers?

    - by Tim
    I'm advising a large client on how to isolate their dev and testing from their production. They already have one domain, let's say xyz.net, with the Active Directory domain "XYZ01". I want to add a second domain, say QAxyz.net, and make its Active Directory domain "QA01". All development and QA servers would be moved to the QAxyz.net domain, and the machines would be part of the QA01 domain. Note: some of these servers will have the same names as the production servers for testing purposes. I believe we would have separate DNS servers for each domain. If I am logged into the QA01 domain, to access the production domain I would qualify my access like so: \\PRODSERVER.xyz.net login: XYZ01\username Do I need to add a forwarder to my QAxyz.net DNS server so that it can see xyz.net? Would I need to do the same on the xyz.net DNS server to see QAxyz.net? I don't know how to advise them on this. Does anyone have any other recommendations for isolating a QA domain? Many thanks in advance! Tim
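
    Once conditional forwarders (or secondary zones) are set up in each direction, a quick sanity check is to ask each domain's DNS server for a name in the other domain; nslookup takes the server to query as its second argument. The server names here are placeholders:

        nslookup PRODSERVER.xyz.net qa-dns01.QAxyz.net    # the QA DNS server should resolve production names
        nslookup QASERVER.QAxyz.net dns01.xyz.net         # and the production DNS server should resolve QA names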

    Read the article

  • How to figure out what VirtualBox did?

    - by AndrejaKo
    I'm trying to boot a custom made-in-ASM OS on my recent laptop. The OS is intended to be installed on a floppy, and its make process creates a bootable floppy. Since I don't have a floppy drive, I installed it on a virtual floppy. After that I used WinToFlash's "create bootable MS-DOS USB drive" option to transfer the floppy image to a USB flash drive. Then I tried to boot my computer from it but got only a repeating broken string on screen. After all that I made a virtual hard disk image from the flash drive using this tutorial and tried to boot a virtual machine from it. The first time I got the same problem as on the real computer. I then used the reset option, and the next time, and every time after that, the OS booted correctly. My question is: how do I figure out what exactly happened to the virtual machine between the first and second boot? UPDATE I just created a new VM with default settings for Windows XP and it has the same problem that I have on the real computer. I was unable to reproduce the procedure which made the first VM work correctly.
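
    To reconstruct what changed between the failed first boot and the working second one, VirtualBox keeps a rotated per-VM log for each run and exposes the current configuration through VBoxManage. A sketch of the kind of commands that help (shown Unix-style; VBoxManage.exe takes the same subcommands on Windows, and the log path depends on the VirtualBox version and machine folder):

        VBoxManage list vms                          # find the VM's name/UUID
        VBoxManage showvminfo "MyTestVM" --details   # current boot order, chipset, storage, etc.
        # VBox.log is the latest run, VBox.log.1 the one before it, so the two boots can be compared:
        diff "$HOME/VirtualBox VMs/MyTestVM/Logs/VBox.log.1" "$HOME/VirtualBox VMs/MyTestVM/Logs/VBox.log"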

    Read the article

  • Eclipse on Ubuntu: Rectangles instead of Strings and some Java methods and classes

    - by Claus Hausberger
    After upgrading from Ubuntu 9.04 to 11.04 (new installation), I have weird problems with the Eclipse editor. With the Eclipse PyDev plugin, whenever I type single-quoted strings like 'bla', they appear as rectangles (both the quotes and the string). First I thought this was a problem with the PyDev plugin, but it also happens with the Java and Scala plugins. With Java it happens, for example, when typing System.out.println("bla"), and then "out" is shown as rectangles only. What is weird is that for about half a second I see "System.out.println" and then the editor changes it to System.[][][].println (not really [] (here I used two brackets); it is shown as rectangles). This is very weird. I've never had this before with any Ubuntu, Java or Eclipse version. Currently, I use: Ubuntu 11.04, Eclipse 3.6, Java 1.6.0_25. The latest plugins for Python (2.1) and Scala (beta 5) were used. Eclipse and the Ubuntu Terminal are set to UTF-8. The problem also happens when using KDE instead of Gnome. I doubt it has anything to do with Java as I use the same versions on older Ubuntu installations (10.04, 9.10, etc.) at work. It does not happen with NetBeans. But I once saw an error dialog from the Update Manager where there were some rectangles in the error widget. Maybe this is the same problem. Any ideas what could be wrong here and how to fix this? Eclipse is unusable, but I need it for work and also for Scala and Python (the Eclipse plugins for those are very good now). Claus
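
    Rectangles in place of characters usually mean the glyphs cannot be drawn with the font the editor ended up with, so one avenue worth trying before blaming the plugins is to make sure a complete font set is installed, rebuild the font cache, and then re-select the editor font in Eclipse's preferences. A sketch, with package names from the Ubuntu 11.04-era repositories (they may differ on other releases):

        sudo apt-get install ttf-dejavu ttf-dejavu-extra
        fc-cache -f -v               # rebuild the fontconfig cache
        fc-list | grep -i dejavu     # confirm the fonts are visible to fontconfig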

    Read the article

  • A-2-Z web hosting on Amazon AWS

    - by JDelage
    All, I am studying web development, and one of my classes is project-based. We have to build a functional site that demonstrates our understanding of: HTML, CSS, Javascript, PHP, MySQL, and potentially Ajax or some other web component. For the project, we can use a local server using WampServer and basically build the site entirely on our laptop. If I have time, I would like to create a real site, and I thought it would be a good way to familiarize myself with Amazon's AWS services. So if I purchase a domain name, can I rely on AWS to host the site from A to Z? I understand I can use AWS to host content, the database, and do the background computations, if needed. What else do I need, and what are the parts that AWS cannot help me with? Second, is there good documentation for a beginner to navigate AWS and learn how to use it (either on Amazon, or some 3rd-party sites, or even a good book, as long as it is up to date)? The ideal documentation would be a tutorial on creating a web site from A to Z on AWS, as detailed as possible. As you can guess, I have limited understanding of the IT issues. I have zero Linux or sysadmin experience, but this is a good opportunity to change that. I hope you can help me. Thank you, JDelage PS: Please keep the answers AWS-specific. At this point, I am only interested in alternative services to the extent that they plug a hole in Amazon's offering.

    Read the article

  • ps aux as non-root doesn't show all processes

    - by JMW
    Hi, I'm using an Ubuntu 10.04 server... When I run ps aux as root, I see all processes; when I run ps aux as non-root, I see JUST the processes of the current user. After a bit of research I found the following:

        root@m85:~# ls -al /proc/
        total 4
        dr-xr-xr-x 122 root   root      0 2010-12-23 14:08 .
        drwxr-xr-x  22 root   root   4096 2010-12-23 13:30 ..
        dr-x------   6 root   root      0 2010-12-23 14:08 1
        dr-x------   6 root   root      0 2010-12-23 14:08 10
        dr-x------   6 root   root      0 2010-12-23 14:08 1212
        dr-x------   6 root   root      0 2010-12-23 14:08 1227
        dr-x------   6 root   root      0 2010-12-23 14:08 1242
        dr-x------   6 zabbix zabbix    0 2010-12-24 23:52 12747
        [...]

    My first idea was that it got mounted in a weird way: /etc/fstab is OK and it doesn't seem to be mounted in a weird way... My second idea was that there might be a rootkit, but it's not a rootkit... rkhunter tells me that there is no rootkit installed... I don't know if it has been like this since the machine was installed or whether it came with an update. I've just installed zabbix-agent on the machine and realized that it didn't work properly... What could have caused such strange permissions (500), and how can I set them back to a normal level (555)? Crazy, I've never seen something like that... Thanks in advance for any help, and merry Christmas :) See you
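
    Permissions of 500 (dr-x------) on the per-PID directories are normally a property of how /proc is mounted or of a hardened/patched kernel rather than something set with chmod, so a reasonable first step is to compare the mount options and the kernel against a machine that behaves normally. A small diagnostic sketch:

        grep ' /proc ' /proc/mounts               # how proc is actually mounted, including options
        stat -c '%A %U %G %n' /proc/1 /proc/$$    # modes on a root-owned and a current-shell PID directory
        uname -a                                  # check for a non-stock (e.g. grsecurity-patched) kernel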

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every single file specified, while pee does the same, but for pipes. These programs send every single line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin to different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

        pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

        pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

        pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

        pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
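
    Newer releases of GNU parallel do grow an option very close to the made-up --demux-stdin: --pipe splits stdin into blocks and feeds each block to a job's stdin instead of turning lines into arguments. If the installed version has it, the command could look roughly like this:

        # split stdin into chunks, hand each chunk to one of 4 sed processes,
        # and merge their stdout back into a single stream
        pv infile | parallel -j4 --pipe --block 10M "sed 's/something//'" > outfile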

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf:

        upstream gunicorn {
            server 127.0.0.1:8080 fail_timeout=0;
        }

        server {
            listen 80;
            listen 443 ssl;
            server_name domain.com ~^.+\.domain\.com$;

            location / {
                try_files $uri @proxy;
            }

            location @proxy {
                proxy_pass_header Server;
                proxy_redirect off;
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 120;
                proxy_pass http://gunicorn;
            }
        }

    The same server needs to serve both HTTP and HTTPS; however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that will correct this issue is changing proxy_redirect to the following: proxy_redirect http:// https://; That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried: if ($scheme = 'https') { proxy_redirect http:// https://; } But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have to duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity's sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?

    Read the article

  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS. So the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example): command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc... Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH. My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like: paddycvs:!:15679:0:99999:7::: If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input which I can't allow in this script. Second, it potentially allows the user to login at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely). The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)
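
    One non-interactive approach that avoids editing /etc/shadow with sed: the ! in the password field marks the account as locked, which some sshd/PAM combinations refuse even for key-based logins, whereas * means "no password will ever match" without locking the account. Either that, or a hash generated up front, can be set with the usual shadow-utils tools; a sketch, with $username standing in for the variable the script already uses:

        # option 1: no usable password, but the account is not locked,
        # so the forced-command SSH key still authenticates
        usermod -p '*' "$username"

        # option 2: set a real password non-interactively from a pre-made hash
        hash=$(openssl passwd -1 'some-secret')    # MD5 crypt; newer OpenSSL also supports -6 for SHA-512
        usermod -p "$hash" "$username"

    Note that the login shell has to stay a real shell (e.g. /bin/sh), because sshd runs the forced command from authorized_keys via the user's shell, which is why /bin/false breaks the CVS/SVN access.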

    Read the article

  • VirtualBox instances on a dedicated server with custom dnsmasq

    - by ovanes
    I have a dedicated server where I plan to run VirtualBox virtual machines. Since the VMs are managed with Vagrant/Chef, I may end up with many different ones. I thought it would be a great idea to deploy dnsmasq on the server, which is going to dynamically assign the IP addresses to the VMs. Since each Vagrant/Chef recipe is configured to set the VM's host name, I can find/reference the appropriate VM by its host name. Finally, the entire infrastructure is not directly accessible via the internet, so the dedicated server is the OpenVPN host. So the entire infrastructure may be seen as:

        +-------------------------------------+
        |           Dedicated Server          |
        |                                     |
        |  +-------------+   +------------+   |        +------------------+
        |  |   DNSMasq   |   |  OpenVPN   |<==========>|      Client      |
        |  +-------------+   +------------+   |        |                  |
        |         ^                ^          |        +------------------+
        |         |                |          |
        |         |             +--+          |
        |         |             |             |
        |     +-------+         |             |
        |     |  VM1  |         |             |
        |     +-------+         |             |
        |        ...            |             |
        |     +-------+         |             |
        |  +--|  VM2  |         |             |
        |     +-------+         |             |
        +-------------------------------------+

    Now some questions which I am struggling with:
    1) Are there any other suggestions for accessing a private infrastructure? I don't want to reinvent the wheel.
    2) On the dedicated server I don't see the vboxnet0 interface, but VirtualBox is installed without a GUI. Accessing the virtual boxes via ssh works fine. Did I miss something?
    3) dnsmasq must serve the local VMs only; otherwise there is a chance that the local dnsmasq starts to serve other servers on the network, which I don't want. Because I don't see vboxnet0, I tend to use the no-dhcp-interface=eth0 config option. Are there any thoughts on that, despite the fact that a second network card (which is not the case) might start serving DHCP requests?
    4) How should I configure the VM's network interface so that I am able to access it via OpenVPN and resolve the hostnames using dnsmasq? I think it should be the host-only network card. Should I do bridging in the OpenVPN config, or is it sufficient to use routing?
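
    On the vboxnet0 point (question 2): on a GUI-less install the host-only interface is not created until something asks for it, which may be why it never shows up. A sketch of creating one with VBoxManage and attaching a VM to it (addresses and the VM name are examples):

        VBoxManage hostonlyif create                     # creates vboxnet0 on the host
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1 --netmask 255.255.255.0
        # give a VM a second NIC on that network (the first NIC can stay NAT for outbound traffic)
        VBoxManage modifyvm "somevm" --nic2 hostonly --hostonlyadapter2 vboxnet0

    For question 3, binding dnsmasq to that interface only (interface=vboxnet0 together with bind-interfaces) is usually a tighter restriction than relying on no-dhcp-interface=eth0 alone.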

    Read the article

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center and they tell me that the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (purchased in May 2010, still under warranty). I have searched a bit and I think it may be best to use disk imaging software for this task. However, I don't know of a good program. I have the following steps in mind: 1) get a 1 TB external HDD; 2) make an image of the existing 500 GB HDD and store it on the external disk; 3) install the new HDD, install a brand new Windows copy, and then install the imaging software on it; 4) using the same software I used to make the image, restore the old HDD image onto the new one. However, I have some questions in mind. First, is this possible? Second, I live in a country where piracy is a big issue and I am sure the support executive who will come to change the HDD will have a pirated copy. But I have a genuine Windows 7 Pro and don't want to lose it. Now, Dell does not supply OS disks, so I can't install it on the new HDD! If I follow the above steps, which version of Windows 7 will be retained? The one in the image (authentic) or the one on the new HDD (pirated)? I am ready to purchase a good program for this task and my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows Migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro? (In case I get a genuine copy of Windows 7!) Any other method to save all the data? Thanks in advance.
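
    If buying imaging software turns out to be unnecessary, the same steps can be done at no cost from a Linux live CD/USB with dd (or a dedicated tool such as Clonezilla): boot the laptop from the live medium, image the old disk onto the external drive, swap the disk, and restore. A rough sketch of the dd variant; the device names are examples, so double-check them with fdisk -l before running anything:

        # with the 1 TB external drive mounted at /mnt/external and the
        # internal 500 GB disk showing up as /dev/sda:
        dd if=/dev/sda of=/mnt/external/old-disk.img bs=4M conv=noerror,sync

        # after the new disk is installed, boot the live medium again and restore:
        dd if=/mnt/external/old-disk.img of=/dev/sda bs=4M

    Restoring a full image this way keeps the existing, genuine Windows 7 installation exactly as it was, so no reinstall (and no pirated copy) is involved.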

    Read the article

  • IPtables: DNAT not working

    - by GetFree
    In a CentOS server I have, I want to forward port 8080 to a third-party webserver. So I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (the --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded) According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this: the rule in mangle/PREROUTING logs the incoming packets; the rule in nat/PREROUTING modifies the packets (it changes the dest IP and port); the rule in nat/POSTROUTING logs the modified packets about to be forwarded. Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets which are supposed to be modified by the second rule. It does log, however, packets that are produced in the server, hence I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule? PS: there are no more rules than those three. All other chains in all tables are empty and with policy ACCEPT.
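
    Two things are commonly missing in exactly this setup, because the DNATed packets have to be forwarded to another box and the replies have to come back through this server: IP forwarding must be enabled, and the forwarded traffic needs to be source-NATed so the third-party server answers this machine rather than the original client. If ip_forward is off, the DNATed packet is dropped at the routing step and never reaches nat/POSTROUTING, which would also explain the missing log entries. A sketch of the usual additions, with placeholder addresses as in the question:

        # 1. let the kernel forward packets at all
        sysctl -w net.ipv4.ip_forward=1

        # 2. make sure the FORWARD chain passes the traffic
        iptables -A FORWARD -p tcp -d thirdparty_server_ip --dport 80 -j ACCEPT

        # 3. rewrite the source address so replies come back through this server
        iptables -t nat -A POSTROUTING -p tcp -d thirdparty_server_ip --dport 80 -j MASQUERADE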

    Read the article

  • How do I set up a Windows NFS share so that I can view its contents on Linux?

    - by hewhocutsdown
    My NFS server is a Windows XP SP3 box with Microsoft Windows Services for UNIX installed. I have a share configured under C:\NFS with the share name NFS and ANSI encoding. Anonymous access is enabled, with the anon UID/GID set to 0/0. Additionally, I've set ALL MACHINES to Read-Write and checked the checkbox to Allow root access. My first NFS client is an Ubuntu 10.04 box with nfs-common installed. Running sudo mount -t nfs 1.1.1.1:/NFS /home/user/NFS succeeds, but when I attempt to view the folder (even as root), it tells me that I do not have the permissions necessary to view the contents of the folder. My second NFS client is an IBM iSeries box running OS/400 V5R3. I used the mount command below: MOUNT TYPE(*NFS) MFS('1.1.1.1:/NFS') MNTOVRDIR('/PARENT/NFS') OPTIONS('rw,nosuid,retry=5,rsize=8096,wsize=8096,timeo=20,retrans=2,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft') CODEPAGE(*BINARY *ASCII) which also mounts successfully. Attempting to WRKLNK '/PARENT/NFS' and using Option 5 to enter the directory yields a "Not authorized to object" error - even though I am a security officer with the *ALLOBJ special authority. My gut says that it's a problem with the Windows share, but I don't know what it could be. Do you have any suggestions?
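
    From the Ubuntu side, it can help to confirm what the Windows NFS server actually exports and how the anonymous mapping comes through before changing the share again; a small sketch of the usual checks (1.1.1.1 as in the question):

        showmount -e 1.1.1.1                    # list the exports as the server advertises them
        sudo mount -t nfs -o nfsvers=3,nolock 1.1.1.1:/NFS /home/user/NFS
        ls -ln /home/user/NFS                   # numeric UID/GID output shows how anon 0/0 is being mapped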

    Read the article

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMWARE ESXi. Currently I have (in testing) 1 hardware server running ESXi but i expect to expand this to multiple pieces of hardware. The current setup is as follows: 1 pfsense VM, this VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to the this VM that are used for access to individual servers (via per incoming IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall but rather the firewall for this virtual pool only. I have 3 VMs that communicate with each other, as well as have some public access requirements: 1 LAMP server running an eCommerce site, public internet accessible 1 accounting server, access via windows server 2008 RDS services for remote access by users 1 inventory/warehouse management server, VPN to client terminals in warehouses These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfsense VM. The pfsense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet. My main question is this: Is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server to server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on the the internet. This is the type of architecture i would use if these were all physical servers, but i'm unsure if the networks being virtual changes the way i should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

    Read the article

  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn, which is serving my Django project. All GET requests hang for ~1 min. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data. Here's my Nginx config:

        # allow for up to 3 connections per second.
        limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/example/;

            # serve directly - analogous for static/staticfiles
            location /media/ {
                # this changes depending on your python version
                root /home/example/;
            }

            location /static/ {
                # if asset versioning is used
                if ($query_string) {
                    expires max;
                }
                root /var/www/example.com;
            }

            location / {
                # Allow for a burst of 50.
                limit_req zone=one burst=50 nodelay;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 10;
                proxy_pass http://localhost:8001/;
            }

            # what to serve if upstream is not available or crashes
            error_page 500 502 503 504 /media/50x.html;
        }

    My Gunicorn config:

        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"

    Is there anything obvious that would be causing the requests to stay open for so long?
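
    To narrow down whether the extra minute is spent in Gunicorn or in how Nginx hands the response to the browser, it is worth timing the same request against both, bypassing the browser entirely; a sketch with curl, using the host and port from the config above:

        # time-to-first-byte and total time straight from Gunicorn
        curl -s -o /dev/null -w 'gunicorn: ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://127.0.0.1:8001/
        # and the same request through Nginx
        curl -s -o /dev/null -w 'nginx:    ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://example.com/

    If Gunicorn itself answers quickly, the usual suspects are response buffering and keep-alive/Content-Length handling between the proxy and the client rather than the application.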

    Read the article

  • Xmonad on a Windows laptop

    - by Kevin L.
    I'm a Linux developer in the market for a laptop. 90% of my time is spent in Emacs, the terminal, and Google Chrome, and I want to use them within the excellent Xmonad tiling window manager. Given these constraints, I can only see two options: run Linux on a laptop, or run Windows on the laptop and spend all of my time working within a Linux VM. Years of experience suggest that the first option will take many frustrating hours and probably be suboptimal w.r.t. battery life, wifi, and fn keys like screen brightness or audio adjustment. For the second option, what would be the ideal setup? I've had a lot of luck with Cooperative Linux on my Samsung NC-10 netbook (Windows XP), but I would have to set up the X11 server myself. What about using VirtualBox (which includes the guest VM's GUI)? Has anyone tried this? Hardware-wise, I'm looking for something in the "Macbook Air killer" category: Samsung Series 9 laptop, Lenovo IdeaPad U300s, &c. (i.e., matte screen, 5h+ battery life, 3ish pound weight). Price is not a consideration; any suggestions?

    Read the article

  • Resuming downloads in Firefox

    - by Kim
    Unfortunately, Firefox has still failed to add the option to resume downloads. I've run into this problem SO MANY times, and in my previous searches I found posts saying Firefox was going to fix that. As of 3.6.3 they haven't. I just tried Free Download Manager (FDM) again, having the Firefox addon FlashGot use it. The download gets passed to FDM and fails, giving the error message "access denied, invalid username or password." No password was required. The site I'm trying to get the file from is turbobit.net, which limits download speeds to 100kb/sec and has a 59 second countdown before you get the link. I guess it's transparently using a password on their end. If I just download normally (save to disk), the download starts fine, but it fails after 30 minutes to 1 hour (always different), and my Wi-Fi connection will stop briefly - and I have to start all over. So I will never be able to download a large file. I also tried DTA instead of FDM with FlashGot, and I get an "access denied" message in DTA. Again, I reloaded, waited the 59 seconds, and downloaded with Firefox, and the download starts fine. The failure message in the Firefox Downloads window is "source file at http... could not be read." Any help would be greatly appreciated. When is Firefox going to finally add the ability to resume downloads????? Is there some other software I haven't found using Google that will work?
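
    For the "some other software" part, both wget and curl can resume a partial download from the command line, provided the server honours range requests; whether turbobit's time-limited links allow it is another matter, but it is a quick test. The URL below is a placeholder:

        wget -c http://example.com/big-file.zip        # -c / --continue resumes a partial file
        curl -C - -O http://example.com/big-file.zip   # -C - lets curl work out the resume offset itself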

    Read the article

  • Exchange 2003 ActiveSync problem with certificate

    - by colemanm
    We're having problems getting iPhones to sync properly with SBS 2003 Exchange. When you add a new Exchange ActiveSync account on an iPhone and enter all the pertinent information, it shows a "Verifying Exchange account info" message for a minute or so, then says everything's verified and asks what you want to sync, Mail, Contacts, Calendars... so it looks like it's working. However, when you go to the Mail app and select the Exchange email account, it just shows an "Inbox" folder with nothing in it. When you try refreshing, it attempts for a second, then says "Last Updated" with a timestamp, as if it worked, but there's no mail and no error message/feedback at all. I think I've narrowed it down to some sort of certificate issue, but I'm having trouble finding out where to go from here... I ran MS's Exchange connectivity testing tool with these results: Our cert was purchased from Network Solutions, and I'd already added it to the IIS Default Website for OWA purposes. But this report makes it look like the cert is somehow problematic. I don't know what to do now... Here's a shot of the cert details, just in case:
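
    Independent of the connectivity tester, one way to see exactly which certificate chain the server hands to the phones is to pull it with openssl from any machine that can reach the server (hostname is a placeholder):

        openssl s_client -connect mail.example.com:443 -showcerts </dev/null
        # inspect the "Certificate chain" section and the final "Verify return code" line;
        # a missing intermediate CA certificate on the SBS box is a common reason for mobile
        # devices rejecting an otherwise valid commercial certificate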

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or Paypal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that both Twitter and Paypal both have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded and installed the root package file and installed all included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, who also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or, I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack. Nginx sits in front of PHP-FPM. Because some of the sites are heavy and there's OPCode caching, PHP is set up so that there are only 5 child processes running. The aim being that each child can deal with any request in less than half a second and then move on to the next request. One problem I've found is that if it's a big chunk of HTML that's getting sent out, and the user has a slow connection, that PHP thread stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection this can mean the user gets cut off with a 504 Gateway Timeout. I was wondering if there was some sort of buffering solution that I could implement within or just behind Nginx that sent the request through and then... well... buffered the content into its own cache and fed that on to the client as and when they could download it. The aim being that the underlying PHP thread can be freed up. What I'm asking for doesn't have to be PHP-specific. Anything that deals with FastCGI, or even any Nginx upstream, might have a similar issue to this.

    Read the article

  • Show full URI/URL in Chrome's developer tools Network tab

    - by Lev
    When using Chrome to debug, I find it incredibly difficult to be efficient due to the fact that I don't see how I can force the "Network" tab of the developer tools to show the full request URI. It will show the full URI if you hover the link and wait a second, but this is incredibly counterproductive. All of my AJAX requests are sent to ajax.php, and handled by using query string arguments, like: ajax.php?do=profile-set ajax.php?do=game-save ... etc. Since I use AJAX extensively, my network tab is filled with "ajax.php", but I have to manually hover each and every entry to find the request I am looking for. Surely there has got to be another way!? I am constantly fed up by something new in Firefox and immediately force myself back into Chrome, but it is always the developer tools in Chrome that keep me from using it for an extended period of time. Hopefully I can find out how to do this so I can continue using Chrome as my numero uno. I've provided a screen shot to show you where I mean:

    Read the article

  • grep command is not searching for the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have one file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in one variable (INPUT_FILE_T), and I want to search for that pattern in the file (PCF_STARHUB_20130625_1). For that I have used the command below:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output: first, only one entry is shown (it should contain two entries), and second, the output contains "PCF_STARHUB_20130625_1:", which should not appear. The output should look like this:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique other than grep, please let me know. Please help me with this issue.
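
    A likely explanation for both symptoms, offered as a guess from the information above: because ${INPUT_FILE_T} is unquoted, the shell first treats SH?*???????????????US.* as a filename glob. If files matching that glob exist in the current directory, they are substituted onto the grep command line, so the first match becomes the pattern and the remaining matches become extra files to search; that is why grep prints a file name prefix (it does so whenever it is given more than one file) and why entries go missing. Quoting the variable and writing the pattern as a real regular expression avoids both problems:

        # match the "_US" entries: "SH_", anything, then "_US." before the extension
        INPUT_FILE_T='SH_.*_US\.'
        grep "$INPUT_FILE_T" PCF_STARHUB_20130625_1

        # expected output:
        # SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        # SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349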

    Read the article
