Search Results

Search found 25792 results on 1032 pages for 'map edit'.


  • lighttpd on Fedora permission issues

    - by Isaac Gateno
    I'm trying to get started with lighttpd on Fedora 16 to run a RESTful API for development. Right now, even with the most basic sample config file, I'm getting 404 pages when I know the pages I'm pointing at exist. From reading other questions I'm leaning towards this being a permissions issue, but I'm confused about how lighttpd runs on Fedora. There's a user called "lighttpd", not "www-data"? I can't see this user in the system-config-users tool and I can't su into it to check which permissions it has. I'm trying to point lighttpd to "/var/www/lighttpd", which has some example pages in it. The permissions for the files inside are set to -rw-r--r-- and the permissions for the folder containing them are drwxr-xr-x. Doesn't that mean that any user can view these files? I'm not sure what else I should be checking, as I don't have much experience with server configuration. Any help would be appreciated. Edit: I was following the tutorial configuration here, so the lighttpd.conf file contains
        server.document-root = "/var/www/lighttpd/"
        server.port = 3000
        mimetype.assign = ( ".html" => "text/html",
                            ".txt"  => "text/plain",
                            ".jpg"  => "image/jpeg",
                            ".png"  => "image/png" )
    and I was just trying to get the basic example page working.
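    A quick way to narrow this down, sketched below, is to check which user the daemon actually runs as and whether SELinux (enabled by default on Fedora) is blocking access; index.html is just an assumed name for one of the example pages:
        # which user lighttpd runs as, and whether that user can read the docroot
        ps -o user= -C lighttpd
        sudo -u lighttpd cat /var/www/lighttpd/index.html   # index.html is an assumed file name
        # on Fedora, a wrong SELinux context often shows up as 404/403 even when
        # the classic permissions look fine
        ls -Z /var/www/lighttpd
        sudo restorecon -Rv /var/www/lighttpd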

    Read the article

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ASCII data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:
        #!/bin/bash
        # Config variables. Age is in days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3
        # Sanity checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi
        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi
        # Get data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME
        # Compress old data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;
        exit 0
    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts... EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting there sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
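    One untested tweak worth trying: under cron there is no terminal attached to stdin, and nc can sit blocked on it indefinitely; redirecting stdin from /dev/null and adding a timeout is a common fix (the 60-second value is just an example):
        # in the "Get data" step of the script above
        nc -w 60 "$HOST" 2202 < /dev/null > "$DATA_ROOT/$FILENAME"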

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:
        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20
    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:
        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40
    My problem is that the SNAT rule is not always run. Inbound traffic causes a connection tracking entry like this:
        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2
    which means the POSTROUTING chain is not run and outbound traffic leaves with the real IP address as the source. I am thinking I can set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work? Edit - Alternative question: Is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
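    For reference, the NOTRACK idea mentioned above would look roughly like the sketch below; note that iptables NAT depends on connection tracking, so untracked packets would no longer be DNATed either, which may make this approach a dead end:
        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK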

    Read the article

  • Can't run utilities/.exe's that use the network from a [DFS] windows share on Windows 2008 servers. Can this be overcome?

    - by Jim Lawhon
    Under Windows Server 2008 I'm unable to run many utilities that use network resources. This works just fine under Windows Server 2003. For example:
        \\domain\dfs\tools$\bin\sendmail.exe ...
        \\domain\dfs\tools$\bin\psexec.exe ...
        echo %_metric% %_value% %_unixtime% | \\domain\dfs\bin\foo$\nc graphite.domain 2003 -w1
    Reproducing and maintaining this folder on a large number of servers/VMs is not desirable. Is there a way to allow Windows Server 2008 to run these tools? If so, can this be enabled via GPO or in a fashion that can be scripted during automated builds? Edit: The commands/tools do work just fine when run from local drives. Edit2: Wget example:
        d:\scripts\helpers>z:\bin\wget http://www.google.com
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = z:/etc/wgetrc
        --2011-04-11 00:32:15--  http://www.google.com/
        Resolving www.google.com... failed: Host not found.
        z:\bin\wget: unable to resolve host address `www.google.com'
    wget can neither use DNS to resolve the IP nor can it use HTTP if provided an IP directly. Edit3: The problem seems to be tied to DFS/DFS shares. Tools run correctly from other normal Windows Server file shares. They also run correctly when run directly from the file servers behind the DFS. They only fail when we attempt to run them from the DFS UNC path or mapped drives.

    Read the article

  • can't register a soft phone to asterisk11

    - by Tom
    I have a VM (on Oracle VirtualBox) running Fedora 17. I've installed Asterisk 11 on it from sources. I've followed the wiki for installation (https://wiki.asterisk.org/wiki/display/AST/Creating+SIP+Accounts) to the letter. The IP of the VM running Fedora is 192.168.1.7 and I can ping it from the host machine (Ubuntu 12.04), which is at 192.168.1.2. I've tried registering with ekiga with the following settings: user: [email protected], password: verysecretpassword, registrar: 192.168.1.7, but I'm getting an error "transport fail". Also, while trying to register I'm logged in to the Asterisk CLI with verbose level 3 and debug level 4 and nothing appears. Some more relevant data: I've added the following code to the end of my sip.conf.sample file:
        [demo-alice]
        type=friend
        host=dynamic
        secret=verysecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0

        [demo-bob]
        type=friend
        host=dynamic
        secret=othersecretpassword
        context=users
        deny=0.0.0.0/0
        permit=192.168.1.0/255.255.255.0
    After I changed the sip.conf.sample file, I created a copy of it named sip.conf, then logged in to the Asterisk CLI and typed "sip reload". Then I try to register an ekiga client from my host machine at 192.168.1.2, but it doesn't work and nothing appears on the Asterisk CLI while in verbose mode level 3. BTW, if there is missing information in my question, please don't close it; comment about what you need to know and I'll edit it into the question. Thanks.
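    A few untested checks that usually narrow this down: confirm chan_sip is actually loaded and listening, turn on SIP debugging, and make sure the Fedora firewall isn't dropping UDP 5060 from the host machine (the commands below assume a stock Asterisk 11 install):
        asterisk -rx "module show like chan_sip"
        asterisk -rx "sip show peers"
        asterisk -rx "sip set debug on"
        # if nothing ever reaches the CLI, the firewall is a likely suspect
        iptables -L -n | grep 5060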

    Read the article

  • Per-mailbox IMAP settings in Exchange 2003 apply successfully but revert to server default

    - by erictheavg
    The title says most of it. I have a Spiceworks mailbox that connects to our Exchange Server 2003 box via IMAP for receiving help desk issues, but for complicated reasons I want it to receive those emails in text-only format. So, I discovered that you can just go to Exchange System Manager > Administrative Groups > First Administrative Group > First Storage Group > Mailbox Store > Mailboxes, right-click the mailbox, choose Configure Exchange Features, edit the properties for IMAP, and set that mailbox to only receive message bodies as plain text. I click OK, then Next, it reports success, and I assume I'm done. But then when I go right back to where I was, I see that "Use protocol defaults" is still checked. Anyone have a clue why this would be? Some other details: I'm logged in as Administrator when I do this. I can't change this setting for the entire IMAP virtual server because some regular users use it. I only have one IP address to play with, which means I can't create another IMAP virtual server. Any suggestions or ideas are greatly appreciated!

    Read the article

  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a dropbox folder to my C drive by just running a portable file? Extra background information because I know you guys hate it when you don't get the entire situation: I go back to University in fall and I need a new storage solution. I decided to use DropBox to sync my tiny University files (< 5 MB). I need to access these files from 4 machines: Windows 7 Home machine Windows 7 University A machine Windows 7 University B machine Android tablet 1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3 but those machines are not mine. There is an easy fix. Run a portable version of the DropBox syncer on a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable DropBox syncer off the internet. But where will it to store the files? A temporary directory on the C drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable DropBox syncer will notice that the temporary directory is empty on each new PC I use and instead of downloading my DropBox folder to the machine, it syncs the other way around i.e. it deletes my entire DropBox. The solution is to mirror my DropBox onto the temporary directory before running the DropBox syncer.

    Read the article

  • Fresh install CentOS 6.4 64b with directadmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on a CentOS 6.4 64-bit virtual machine with 4GB memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from this 1GB machine have been migrated yet, so the migration is still in progress and already my server crashes every day. The server performance up until that moment is perfect. The DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but it has also crashed the entire machine before. The memory usage in DA seems to be normal:
        directadmin directadmin (pid 3923 22158 22159 22160 22161 22162) 8.75 MB
        dovecot dovecot (pid 3851) 47.8 MB
        exim exim (pid 1350) 1.29 MB
        httpd (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744) 490.4 MB
        mysqld mysqld (pid 1299) 287.8 MB
        named named (pid 3807) 16.3 MB
        proftpd proftpd (pid 1481) 1.91 MB
        sshd sshd (pid 1173 21494) 5.16 MB
    Restarting services immediately frees up memory, but slowly over time the memory usage increases (it takes about 24 hours to crash). The commands
        sync
        echo 3 > /proc/sys/vm/drop_caches
    will free all memory correctly. I could just create a cronjob, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen. Edit: free -m after drop_caches:
                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:        712       3117
        Swap:          991          0        991
    I'll post another one this evening.
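    For what it's worth, the cron stopgap mentioned above would look like the sketch below (the 04:00 schedule is arbitrary); dropping caches only masks the symptom, since page cache is normally reclaimed on demand anyway:
        # /etc/cron.d/drop-caches -- untested stopgap
        0 4 * * * root sync && echo 3 > /proc/sys/vm/drop_caches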

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering: Approach 1: Is there a way to force Excel to append the results of every update (where the update would be triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined. Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But then I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine after 10 years: Excel would need to load 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter or something)? Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...

    Read the article

  • Why is MySQL unable to open hosts.allow/hosts.deny?

    - by HonoredMule
    I have a storage server running Nexenta (OpenSolaris kernel, Ubuntu userspace) with MySQL on top of a ZFS storage array, using innodb_file_per_table and ulimit -n set to 8K. mysqltuner.pl confirms the file limit and claims there are 169 files. The following command:
        pfiles `fuser -c / 2>/dev/null`
    indicates one mysqld process having 485 file/device descriptors (and they're almost all for files), so I don't know how reliable the tuning script is, but it is still way less than 8K, and this list also finds no other process which is close to its limit. The global total number of descriptors in use is around 1K. So what can cause mysqld to be constantly streaming the following errors?
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.allow: Too many open files
        [date] [host] mysqld[pid]: warning: cannot open /etc/hosts.deny: Too many open files
    Everything appears to actually be operating fine, but the issue is constantly flooding the admin console and starts right away on a fresh boot (not only reproducible, but always from mysqld and always the hosts files, whose permissions are the default -rw-r--r-- 1 root root). I could, of course, suppress it from the admin console, but I'd rather get to the bottom of it and still allow mysqld warnings/errors to reach the admin console. EDIT: not only is the actual file descriptor count well within sane limits, the issue also persists (with immediate appearance) even with the file limit raised to 65535, and always only on hosts.allow/deny.
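    Two untested checks on the OpenSolaris side that may help: compare the descriptor limit the running mysqld actually inherited against a rough count of what it really has open (the grep patterns are illustrative):
        plimit $(pgrep -x mysqld) | grep -i nofile          # limits of the live process
        pfiles $(pgrep -x mysqld) | grep -Ec '^ +[0-9]+: '  # rough count of open descriptors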

    Read the article

  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle as the dependencies for the experimental nvidia-glx package are still somewhat broken. I got it to work without forcing the installation of any packages and without modifying the packages. However, after the upgrade compiz performance has been abysmal. I am using the desktop wall plugin and switching viewports is really slow - takes a few seconds for each switch. In addition to this, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the amount of windows on that virtual screen - empty screens switch almost at normal speed, single browser windows work almost decently, but just 4 rxvt terminals slows the switches down to a crawl. My compiz configuration should be pretty basic. Xorg is likewise configured without anything special - the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with the nvidia-settings and compizconfig trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is laptop GPU that is from the Geforce GTX 200 series. Graphics card performance should naturally be no problem. EDIT: In the end, nothing really worked, and I got really annoyed with the state of compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answers to this question even though the issue was not resolved.

    Read the article

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of what files have changed and upload them to my server using FTP. Here's my workflow:
    1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
    2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
    3. Take the files, open FileZilla (FTP client) and upload the files to a remote server.
    I was wondering if there was a way I could omit step 3 from my workflow. Basically I would like the changed files to be automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment: Windows 7 Ultimate x64 with TortoiseSVN x64, Notepad++ text editor; files edited are PHP, CSS, JS, HTML, etc.; the server is running Linux with PHP 5.2 and MySQL; FileZilla is used to upload files, and I can connect to the server via SSH if that is needed. Thank you in advance.

    Read the article

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test if these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge them, so there shouldn't be any trace of them left, right? EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove:
        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
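    A sketch of the kind of quick check mentioned in the edit, assuming the leftover paths are collected in a text file (the file name is hypothetical):
        # report files that no installed package claims ownership of
        while read -r f; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "not owned by any package: $f"
        done < leftover-python26-files.txt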

    Read the article

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Other programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.
        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted
    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period, but renaming them -- which works -- does not change anything). What else can I do to take ownership of these files? Edit: This question comes from Stack Overflow because I originally thought it was a git problem. It's definitely not (just) git. Anyway, this is to help put some of the comments in context.
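    Since the error mentions extended attributes, a few untested macOS checks may show what is actually blocking the copy; the cp -X variant skips extended attributes and resource forks entirely (the paths simply reuse the one from the error message):
        ls -lO@e .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca   # BSD flags, xattrs, ACLs
        xattr -l .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca   # list the xattrs themselves
        # copy without extended attributes, in case only they are the problem
        sudo cp -RX .git /eraseme/blah/.git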

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to provide us with an overview of hours worked, availability & schedule conflicts without too much disruption to our various, hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:
    - iCloud syncing often needs to be re-jiggered.
    - The "view only" option on shared calendars does not seem to work, which makes all shared calendars editable by others.
    - There is no decent reporting with this workflow.
    - There is no task categorisation or tagging.
    - Things get very busy in iCal when working with more than 2 shared calendars.
    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. I'm interested in solutions to improve the above workflow and enable us to elegantly increase the number of users.

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (suppose the service doesn't log stuff like this on its own)? What I want to do is gather some information based on who's connecting, to be able to tell things like what times of the day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now:
    1. Write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll (a rough sketch of this follows below). This idea seems like such a waste of CPU time though, since there may not be a new connection.
    2. Write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass that connection along to the real service.
    Edit: It just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
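    A minimal sketch of idea 1, polling more often than cron allows (the port, the 5-second interval and the log path are all placeholders):
        #!/bin/bash
        PORT=${1:?usage: $0 port}
        PREV=$(mktemp); CURR=$(mktemp)
        trap 'rm -f "$PREV" "$CURR"' EXIT
        while sleep 5; do
            # established TCP connections to the service port, peer address:port only
            ss -tn state established "( sport = :$PORT )" | awk 'NR>1 {print $NF}' | sort -u > "$CURR"
            # anything present now but not in the previous sample is a new connection
            comm -13 "$PREV" "$CURR" | sed "s/^/$(date -Is) new peer: /" >> /tmp/service-connections.log
            cp "$CURR" "$PREV"
        done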

    Read the article

  • VNC Server that can be used from command line?

    - by jesusiniesta
    I'm looking for a replacement for a custom VNC server that we have been using in my company for a long time. I need a simple executable that can be run from the command line by an IT support application without the user noticing it (our application will warn the user; we don't want him to see that we are using that VNC server). I need it to support Windows and preferably also OS X. The only option I've found is UltraVNC, but I can't configure it from the command line to accept loopback connections without authentication. We already have a whole VNC viewer + VNC repeater + bouncers architecture, and the only missing piece is the VNC server. Do you know any solution you could suggest? I'm afraid I'll end up developing a new VNC server myself, maybe based on an open-source one. EDIT: When I said I don't want the user to notice this VNC server, I should have added that I don't want him even noticing the installation. So it's better if it can be installed silently or executed as a portable executable (for instance, UltraVNC can be installed and run as a service from the command line, or simply executed quietly, with only a notification icon; its problem is that I can't run it without authentication).

    Read the article

  • Strange Photoshop Problem: Can not select, zoom, paint, option button 'locked'

    - by nikcub
    I have a very strange problem with Photoshop. I cannot use any of the tools, since the cursor appears 'locked'. If I press V on my keyboard, it goes to the zoom tool, but the cursor does not change. If I select the paintbrush tool, I can only paint if I hold down the Option key. This is what the cursor looks like (I had to paint it since I couldn't capture it): a rectangle with two lines through it. I am running Photoshop CS4 on a MacBook Pro with Mac OS X 10.6.6. Using both the trackpad and an external Logitech MX5000 mouse I see the same issue. This started when I fired up Photoshop today for the first time in a while. I can't remember changing any options or doing anything that could cause this. Is it possible that the Option key is somehow locked in place, or that there is some equivalent of Num Lock on? Very strange problem; I would appreciate any help anybody can offer. Edit: To add, the icon remains the same within all the menu options - it never goes back to being just a normal mouse cursor. Also, right-click works fine, and if I hold down Option, the cursor goes back to normal and I can paint with it. I can't use Marquee, Lasso, Crop, Type etc. even with Option held down. When I go into Bridge, it is the same icon.

    Read the article

  • Ubuntu problem - monitor out of range

    - by Kelp
    Hello, I am using an external monitor for my laptop to run Ubuntu with. I just updated Ubuntu today, but when it is about to reach the Ubuntu login screen, then the monitor says "out of range." Now, Ubuntu boots up into the GUI if I unplug my monitor and use my laptop screen, but I prefer to use the external display. I have tried all of the suggestions from my search results in Google. I tried pressing Ctrl + Alt + +, but nothing happens. I tried pressing Ctrl + Alt + -, but nothing happens. I used Ctrl + Alt + F2 to get into a terminal to run the command: sudo dpkg-reconfigure xserver-xorg, but nothing happens. I believe there are supposed to be options to change the settings, but it does not even give me any. I tried to edit /etc/usplash.conf and /nano/etc/usplash.conf, but they do not exist. I did sudo apt-get update and sudo apt-get upgrade hoping that it would install drivers or something to help my situation, but they did not help. My monitor is a Westinghouse 22" LCD with resolution 1680x1050. It has been working for the past few months until I updated it today.
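    One untested workaround once the session is up on the laptop panel: force the external output to its native 1680x1050 mode with xrandr (VGA1 is an assumed output name, and the modeline numbers are what cvt 1680 1050 prints on most systems):
        xrandr -q                                   # find the real name of the external output
        cvt 1680 1050                               # prints the modeline pasted below
        xrandr --newmode "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
        xrandr --addmode VGA1 "1680x1050_60.00"
        xrandr --output VGA1 --mode "1680x1050_60.00"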

    Read the article

  • What does it mean to install two OS's alongside each other?

    - by Josh
    I currently have Windows 7 installed on my PC. However, I just tried out Ubuntu via booting from a disc and I love it. I want to install it onto my HDD, but I don't want to get rid of Windows 7. I know HOW to do this, but I am a little unsure what the consequences might be. What does it mean to install Ubuntu alongside Windows? Do they share the same resources? Also, I have my HDD already partitioned into two sections, a 70 GB section where Windows is installed and then another 400 GB section where all my data is stored. There is currently 26 GB free on the 70GB partition. I know Ubuntu doesn't take up much space. However, if I install Ubuntu in that space, will I still be able to install programs on Windows in the future? My main concern is that I am going to short-change my hard drive space for future installations. EDIT: I guess another big question I have is if I install a program on one OS, will the other be able to use it?

    Read the article

  • Route web browsing through a separate interface

    - by tkane
    I'd like to route web browsing through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this? Below is my configuration. Thank you :) Edit: This is about desktop configuration, not a web server setup. Basically I want to use one of my connections to browse the web and the other one for everything else.
    ifconfig:
        eth1      Link encap:Ethernet  HWaddr 00:1d:09:59:80:70
                  inet addr:192.168.2.164  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:33 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:41 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4771 (4.7 KB)  TX bytes:7081 (7.0 KB)
                  Interrupt:17
        wlan0     Link encap:Ethernet  HWaddr 00:1c:bf:90:8a:6d
                  inet addr:192.168.1.70  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:77 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:14256 (14.2 KB)  TX bytes:14764 (14.7 KB)
    route:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     *               255.255.255.0   U     1      0        0 eth1
        192.168.1.0     *               255.255.255.0   U     2      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        default         adsl            0.0.0.0         UG    0      0        0 eth1
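    A rough, untested sketch of one way to do this with policy routing plus an fwmark; it assumes the wlan0 gateway is 192.168.1.1 and that only ports 80/443 count as "web":
        # mark locally generated web traffic
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        # send marked packets out through wlan0 via a dedicated routing table
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev wlan0 table 100
        # rewrite the source address, since it was chosen from the main table before the reroute
        iptables -t nat -A POSTROUTING -o wlan0 -m mark --mark 1 -j SNAT --to-source 192.168.1.70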

    Read the article

  • Can I charge USB devices from a powered hub that isn't connected to a PC?

    - by Anodyne
    This will probably sound familiar to most of you... In my home, we have a whole bunch of devices that can be charged via USB (two iPhones, a BlackBerry, an iPod Touch, etc ad nauseam). We also have a bunch of USB chargers, each of which has a single USB port on it. I'd like to have something permanently connected to AC power with at least 4 USB ports on it, so we can just plug devices in and don't need to go looking for a free outlet. So here's the question: if I buy a powered USB hub, will that do the job even if I don't connect it to a PC? Ideally if you have a hub that you can personally verify will be suitable, let me know the manufacturer and model :-) Thanks in advance! EDIT: The solution I eventually went for was this: Kensington 4-Port USB Charger for Mobile Devices (Europe) There's also a US version here: Kensington 4-Port USB Charger for Mobile Devices (USA) It arrived yesterday, so I used it to charge the following devices, all at the same time, overnight last night: 32GB iPhone 3GS 16GB iPhone 3G First-generation iPod Touch Kensington Portable Power Pack for Mobile Devices I can't say anything about the charging speed (as I left it overnight) but all devices were fully charged this morning.

    Read the article

  • How to install wordpress without a web browser

    - by bvandrunen
    What I am trying to do is automate WordPress website creation for the company I work for. We have lots of information in our database for our customers and we want to create a WordPress website for each customer. The process works great and we have no trouble with the creation of websites, the transfer of data or anything like that. The problem we do have is when we buy a new domain (http://www.newdomain.com): our process breaks (we call a stored procedure which installs all the data after the URL is called to install WordPress) if the domain takes more than 15 minutes to resolve. We have tried looping (where the process checks to see if the domain resolves and keeps trying - but eventually it fails). So what we are looking for is a way to run the WordPress install against a URL without the domain actually resolving yet. I have seen possibilities where you can change the wp-config file, but this doesn't work since we have more than one domain and it changes the source URL for all the domains. What we really need is just a way for us to manually start the install script, either through a database call or some other way that doesn't check whether the domain is resolved or pointing at the server. Thanks for any suggestions. EDIT: All we do to install WordPress is call this URL: http://"newdomain".com/wp-admin/install.php?step=2 - if you change settings in the backend, calling this URL will install WordPress without having to go through the wp-admin/install.php form.
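    If the install just needs the right vhost to be hit before DNS propagates, an untested workaround is to call the install URL locally with a forced Host header (or a temporary hosts entry); www.newdomain.com stands in for the real domain and 127.0.0.1 assumes the web server is on the same machine:
        # hits the local web server but lets Apache/nginx select the new vhost
        curl -s -H "Host: www.newdomain.com" "http://127.0.0.1/wp-admin/install.php?step=2"
        # alternative: short-circuit resolution on the server itself (as root), remove the entry once DNS resolves
        echo "127.0.0.1 www.newdomain.com" >> /etc/hosts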

    Read the article

  • Windows Home Server 2011, No disks "suitable for a backup destination"

    - by Scott Beeson
    I recently installed Windows Home Server 2011 and love it. However, when I try to set up server backups, it says no suitable disks are available. Initially, before I set up my RAID, it found one of my twin drives and said it would work. Once I set up the mirroring, that one is no longer available (obviously). However, I have an internal SATA 1TB drive and an external USB 2.0 1TB drive hooked up. Both are recognized by Disk Management. WHS 2011 still says nothing is suitable for backups. The two drives' details are as follows. Edit to clarify: the system partition is on Disk 0, not listed below; the two below are the two that SHOULD be available for system backups.
        Disk 1: Dynamic, "Data" (D:) 931.51 GB NTFS, Healthy
        Disk 3: Basic, 200 MB Healthy (EFI System Partition), "Backup" 930.66 GB NTFS, Healthy (Primary Partition)
    What's a bit odd is that in Disk Management the "Backup" volume does not show a drive letter, even though I assigned Z: (which is reflected in "My Computer"). I also cannot make this a dynamic disk, as it says that is unsupported by the device.

    Read the article

  • How can I optimize my ajax calls to deliver in 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me. They look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd fixed. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seemed to be angry that I did a comparison to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, provided my server response time is just a fraction of a millisecond? Assume the results are served from Nginx - Varnish with a cache hit. Here are some answers I would like to suggest myself, assuming the response sizes remain more or less the same:
    1. Ensure results are HTTP compressed.
    2. Ensure SPDY if you are on https.
    3. Ensure you have initcwnd set to 10 and disable slow start on Linux machines (a rough command sketch follows below).
    4. Etc.
    I don't think I'll end up with 60 ms at Google's level, but your collective expertise can easily help shave off 100 ms, and that's a big win.
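    Item 3 of that list translates into something like the untested commands below (eth0 and the gateway lookup are assumptions about the server):
        GW=$(ip route | awk '/^default/ {print $3; exit}')
        ip route change default via "$GW" dev eth0 initcwnd 10   # larger initial congestion window
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0           # keep cwnd across keep-alive idle gaps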

    Read the article
