Search Results


  • Outdoor WiFi Mesh Topology vs. Repeaters

    - by IronJaxor
    Here's the current configuration in our organization (which I believe is incorrect): We have a number of Cisco 1500 series APs (22 in total) that are mounted outdoors to provide seamless WiFi coverage over a large area. Each AP, however, has its own physical ethernet connection back to the WLC (all the APs are marked as Root APs). They are all broadcasting the same SSID. We have tried to stagger the channel selection, but because there are only three non-overlapping channels to choose from, and in some areas the density of APs is quite high, there are multiple areas of channel interference. With this configuration we experience 100-150 client disconnects every day. (Our clients are mobile, so they move throughout the coverage area constantly.) My idea is to switch the APs to the same channel, use the built-in functionality of the 1500 series to use 802.11a as the backhaul, and designate one or two APs as root APs wired back to the WLC, thereby forming a WiFi mesh, which, if I'm not mistaken, is the point of the 1500 series in the first place! I am, however, completely new to WiFi networks and wondering if I am simply mistaken in what I believe my proposed changes will enable, or if there is a better way to tackle the WiFi topology.

    Read the article

  • Clicking far away in vim in tmux in urxvt

    - by paps
    I use vim inside tmux inside urxvt, and the mouse works perfectly well for clicking and selecting text, except when I want to click too far to the right. It seems to be related to the distance in number of columns from the left. When I go beyond column ~200 (not sure about the exact number), clicking simply does nothing. Note that it's not related to a vim window: with two vim windows taking ~150 columns each, clicking will not work after the ~50th column in the second window. It's related to the whole vim session. Also note that clicking far away in a big tmux pane (200 columns) works perfectly. In my .tmux.conf I have this line:

        set -g default-terminal "screen-256color"

    and in my .vimrc I have this:

        if &term =~ "^screen"
            autocmd VimEnter * silent !echo -ne "\033Ptmux;\033\033]12;7\007\033\\"
            let &t_SI = "\<Esc>Ptmux;\<Esc>\<Esc>]12;5\x7\<Esc>\\"
            let &t_EI = "\<Esc>Ptmux;\<Esc>\<Esc>]12;7\x7\<Esc>\\"
            autocmd VimLeave * silent !echo -ne "\033Ptmux;\033\033]12;14\007\033\\"
        end

    It changes the cursor's color depending on the editing mode of vim, and it works, meaning that tmux really sets $TERM to "screen-256color", but I don't know if this has any relevance to my mouse problem. I'm running Ubuntu 12.04, vim 7.3, tmux 1.6 and rxvt-unicode 9.14. Does anybody have an idea about what is causing this problem? Thanks.
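
    The ~200-column cutoff lines up with a known limitation: the classic xterm mouse protocol encodes each coordinate in a single byte, so positions beyond column 223 cannot be reported at all. As a hedged sketch only (not a confirmed fix for this exact vim/tmux/urxvt combination, since every layer in the chain has to pass the extended sequences through), newer Vim builds can be told to use an extended mouse protocol when the corresponding feature is compiled in:

        " Only takes effect if Vim is new enough to have these features compiled in;
        " otherwise the block is skipped and behaviour is unchanged.
        if has('mouse_sgr')
            set ttymouse=sgr      " SGR extended protocol, no 223-column limit
        elseif has('mouse_urxvt')
            set ttymouse=urxvt    " urxvt's extended protocol
        endif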

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise
    I've been tasked with finding a solution to the following scenario:
    - our main file server is a Linux machine
    - on the LAN, users simply access the files using SMB
    - each user has an account on the file server and his/her own access rights
    - user accounts are simple passwd/group security accounts, not NIS/LDAP
    The problem
    We want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must of course, and users would be expected to log in. The connection to the server should also be encrypted. Does anyone have pointers to neat solutions? Any experiences?
    Edit
    The client machines are Windows only.

    Read the article

  • Replacing the LCD panel in a netbook (Asus Eee PC 1005)

    - by neilfein
    Yesterday, I was cleaning up and dropped my Asus Eee 1005PE. The screen is cracked inside (i.e., cracks are visible only when on), and no longer works properly. I booted up with another monitor attached, and the computer itself is fine, but needs a new screen. Best Buy wants at least $250 to repair it (that includes their $150 fee to breathe in the same room as the unit), and Asus was of no help at all. (They're incredibly cagey and won't provide any money numbers at all, not even the cost of the part.) If replacing the LCD is no more trouble than replacing memory or a hard drive, I can do that. It's within my means to buy the part (18G241010402, a TFT LCD), but I'd like to know more about the procedure involved. My question: How does one replace the screen in this unit? Do I simply open the case and swap out the unit, or do I need to disassemble anything else to get to the screen? I don't want to order the part and then end up in a situation like this. Is the case screwed shut, or is it like an iPod where they glue things closed? I know enough about my abilities with a soldering gun to not attempt to solder tiny wires; would any of that be involved?

    Read the article

  • Frequent freezes with constant disk activity on SSD netbook

    - by SamsLembas
    I am running Arch Linux on an HP Mini 1000 with an SSD. The machine is a little under a year old and fairly heavily used. About a month ago the machine started freezing up. During the freezes, the system is almost completely unresponsive, especially, it seems, for disk-intensive tasks such as launching an application for the first time since reboot. The disk activity LED is constantly illuminated during the freezes. After somewhere between 30 seconds and 3 minutes, the machine returns to normal operation. I am pretty sure that the SSD is the source of the problem. Iotop reports a disk transfer rate of 0 during the freezes, so I think it must be getting "stuck" and simply not performing any r/w during that time. I can't seem to find any information on these symptoms on the Internet, so any input on exactly what might be the cause of this would be greatly appreciated. The machine is under warranty, but I would rather not deal with HP until I actually know what is going on. Thanks.

    Read the article

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links. This has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing up this computer daily. However, due to the data transfer from the old laptop to the new laptop, the timestamps have obviously changed, and will thus cause my daily rsync backup to re-transfer all of the data. I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum, instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created, and it appears the files that should be hard-linked are simply being copied to the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why this is occurring. Checksums match for files that I think should be hard-linked. I have looked through the rsync man page and Googled around a bit and have been unable to find anything for me to better understand this behavior.
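
    For context, here is a hedged sketch of the usual hard-link rotation (SRC, DEST, backuphost and the date-based snapshot names are placeholders, not the poster's actual script). As far as I understand the --link-dest logic, a file is only hard-linked into today's snapshot when it is unchanged and its attributes (timestamps, ownership, permissions) already match; with -c the "unchanged" decision is made by checksum, but a checksum-identical file whose timestamp differs still cannot be hard-linked, so rsync copies it locally from the previous snapshot into the new one instead, which matches the behavior described above.

        #!/bin/sh
        # Minimal daily hard-link snapshot sketch; paths and host are placeholders.
        SRC=/home/user/
        DEST=/backups/laptop                 # directory on the Fedora backup host
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)

        rsync -a -c --delete \
              --link-dest="$DEST/$YESTERDAY" \
              "$SRC" backuphost:"$DEST/$TODAY/"

    A hedged consequence of that reading: letting one such "copying" run complete once should leave the newest snapshot carrying the new laptop's timestamps (mostly local copying on the backup host rather than network transfer), after which subsequent daily runs can hard-link normally again.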

    Read the article

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router with no iptables or other firewall and no networking applications running on it, just a pure router. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, and then no more connections can be created for several minutes; then another 100 connections can be created, there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum numbers go up about 10 times, so the problem is probably with many connections from different IPs. As traffic is simply routed, it doesn't have to do with the number of file descriptors, iptables connection tracking, and other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
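
    One hedged place to look for a kernel-side limit of roughly this size (the sysctl names below are standard, but whether they matter depends on whether the many unique IPs sit on directly connected subnets): the ARP/neighbour cache defaults to about 1024 entries before garbage collection starts refusing new ones, which can stall new flows in batches much like the pattern described above.

        # The kernel logs this explicitly when it happens
        dmesg | grep -i "neighbour table overflow"

        # Current thresholds (defaults are commonly 128 / 512 / 1024)
        sysctl net.ipv4.neigh.default.gc_thresh1 \
               net.ipv4.neigh.default.gc_thresh2 \
               net.ipv4.neigh.default.gc_thresh3

        # How many neighbour entries are in use right now
        ip -s neigh show | wc -l

        # If that turns out to be the bottleneck, raising the limits is the usual workaround:
        # sysctl -w net.ipv4.neigh.default.gc_thresh3=4096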

    Read the article

  • In Windows 7 power management, is it possible to set different sleep settings for different SATA disks?

    - by Ben Voigt
    I'm having an issue with Windows 7 either freezing up or generating a BSOD coming out of sleep. I suspect that it is related to my boot/OS drive, an OCZ Vertex SE SSD, because numerous other Vertex users have reported sleep problems. Notably, if I put the computer to sleep, it almost always wakes correctly. If it goes to sleep after a timeout, it almost always BSODs. I disabled timed sleep and now it freezes when left unattended. My next step is to disable "Put hard disks to sleep after X minutes", but I'd like to change this setting only for the SSD and not for the rotating data disks, which I would like to spin down normally. Does anyone know a place to configure sleep on a per-disk basis? I don't need to set different timeouts on different disks (although that would be nice), simply setting "this disk sleeps" and "sleep is disabled for this disk" would be great. Additional system information: Windows 7 Ultimate x64, Core i5 - P55 chipset, Intel RST drivers are installed. One SSD, two rotating HDD, and a DVD-RW drive are all connected to the Intel SATA ports. I could potentially move some of these to my motherboard's other SATA controller if that would help.
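
    A hedged note on where this setting lives: as far as I know, the "turn off hard disk after" timeout in Windows 7 is a per-power-plan setting, not a per-disk one, so the built-in power options cannot spin down the data disks while leaving the SSD alone. Disabling it globally from an elevated command prompt looks like this (the value 0 means "never"):

        rem Disable the global disk idle timeout for both AC and battery power
        powercfg /change disk-timeout-ac 0
        powercfg /change disk-timeout-dc 0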

    Read the article

  • Terminal is not letting me enter commands unless I hit enter a bunch of times

    - by ninja08
    Whenever I open terminal it normally allows me to immediately begin entering commands. Earlier today I did the GitHub setup described here: https://help.github.com/articles/set-up-git And then all of a sudden the prompt won't accept commands unless I hit enter a few times. This is what it looks like:

        Last login: Fri Nov 9 11:43:28 on ttys001
        mysql.save: Permission denied
        mysql.save: Permission denied
        /Users/Nick/.zshrc:32: command not found:  .

        ~ git: ?
        ~ git: ?
        ~ git: ?

    See the big space? That's because it simply will never show the ~ git: prompt unless I hit enter 3-4 times. Also, it never used to say ~ git: before I did the git setup. I'm not sure what I changed. I've checked the zshrc file and commented everything out to find the line causing the problem; it turns out it was this line:

        source $ZSH/oh-my-zsh.sh

    Within the oh-my-zsh.sh file I've commented out each block of code starting at the top, and I've found that this block is causing it:

        # Load the theme
        if [ "$ZSH_THEME" = "random" ]
        then
          themes=($ZSH/themes/*zsh-theme)
          N=${#themes[@]}
          ((N=(RANDOM%N)+1))
          RANDOM_THEME=${themes[$N]}
          source "$RANDOM_THEME"
          echo "[oh-my-zsh] Random theme '$RANDOM_THEME' loaded..."
        else
          if [ ! "$ZSH_THEME" = "" ]
          then
            if [ -f "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme" ]
            then
              source "$ZSH_CUSTOM/$ZSH_THEME.zsh-theme"
            else
              source "$ZSH/themes/$ZSH_THEME.zsh-theme"
            fi
          fi
        fi

    Read the article

  • Parking domains and avoiding so-called "search engine penalties"

    - by senthilkumar-c
    I have purchased two domains from one particular registrar and hosting from GoDaddy. Assume they are domain1.com and domain2.com, and assume my hosting IP address is 111.111.111.111. I added both domain1.com and domain2.com in my domain management control panel and gave the same two nameservers for both domains at my registrar's control panel. So, now, both domains should show the same website. When I ping "domain1.com" or "domain2.com" the results say:

        Pinging domain1.com [111.111.111.111] with 32 bytes of data:
        Pinging domain2.com [111.111.111.111] with 32 bytes of data:

    respectively. So, they both point to the same hosting IP. BUT, internally, I have configured IIS to point them to different folders so that different websites are shown. (My hosting plan is expensive and I intend to use the space and bandwidth for many websites.) But still, technically, all domains point to the same IP address. Is this a bad thing? Is this what is called "domain parking"? I read some search engine forum posts saying that two domains pointing to the same IP/website will be penalised by search engines. I have also read that simply "parking" the domains won't attract a penalty. I don't know whether what I have done is parking or the so-called "wrong" thing. Can someone shed light on what I have done and what I should do? I don't want to be blacklisted by any search engine. P.S. I know this is not a search engine forum, but I am new to website hosting and domains and I am very weak in nearly all technical terms and concepts relating to web hosting and domains. I thought this would be a good place to understand these things.

    Read the article

  • VNC Server that can be used from command line?

    - by jesusiniesta
    I'm looking for a replacement for a custom VNC server that we have been using in my company for a long time. I need a simple executable that can be run from the command line by IT support software without the user noticing it (our application will warn the user; we don't want him to see we are using that VNC server). I need it to support Windows and preferably also OS X. The only option I've found is UltraVNC, but I can't configure it from the command line to accept loopback connections without authentication. We already have a whole VNC Viewer + VNC Repeater + Bouncers architecture, and the only missing piece is the VNC Server. Do you know of any solution you could suggest? I'm afraid I'll end up developing a new VNC server myself, maybe based on an open-source one. EDIT: When I said I don't want the user to notice this VNC server, I should have added that I don't want him even noticing the installation. So better if it can be installed silently or executed as a portable executable (for instance, UltraVNC can be installed and run as a service from the command line, or simply executed quietly, with only a notification icon; its problem is that I can't run it without authentication).

    Read the article

  • SSH connection times out unless I tunnel in from a different server

    - by rm-vanda
    OK, so this just started last week. Whenever we try to connect to our server via ssh (we use sftp as well), the connection times out. However, when you ssh to any other server and then ssh into the machine from there, it works flawlessly. Now, the mind-blowing thing is that sometimes the ssh connection will succeed. Moments ago, I tried it from another machine, and then my own, and it worked, only to time out the next go around. Last week, simply restarting the ssh daemon worked, but this week, no such luck. I even went in and changed /etc/hosts.allow to:

        ALL : ALL

    and /etc/hosts.deny is blank. The firewall config hasn't changed, but I even disabled the firewall to see if that would work. It did, for a moment, before cutting off again. (ufw is set to "ALLOW", not "LIMIT".) When I try SSH'ing in from my phone, it works fine. So it seems the problem is with our ISP/router/gateway. However, I see no log in the router/gateway that says it's blocking our connections, and that wouldn't explain why we can SSH into any other server from our network except for this one. I truly appreciate any insight that anyone may have on this matter.
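
    A hedged set of checks (host names are placeholders, and the log path assumes a Debian/Ubuntu-style server): since hopping through another server works while the direct path stalls, comparing a verbose client trace with the server-side log, and testing the path MTU on the direct route, usually narrows down where the packets are being lost.

        # From the client: see exactly where the handshake stalls
        ssh -vvv user@server.example.com

        # Path MTU test: send a full-size, non-fragmentable packet along the direct path
        ping -M do -s 1472 server.example.com

        # On the server: check whether the timed-out attempts ever arrive at all
        sudo tail -f /var/log/auth.log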

    Read the article

  • 150 TB and growing, but how to grow?

    - by seandavi
    My group currently has two largish storage servers, both NAS running Debian Linux. The first is an all-in-one 24-disk (SATA) server that is several years old. We have two hardware RAIDs set up on it with LVM over those. The second server is 64 disks divided over 4 enclosures, each a hardware RAID 6, connected via external SAS. We use XFS with LVM over that to create 100TB of usable storage. All of this works pretty well, but we are outgrowing these systems. Having built two such servers and still growing, we want to build something that allows us more flexibility in terms of future growth and backup options, that behaves better under disk failure (checking the larger filesystem can take a day or more), and that can stand up in a heavily concurrent environment (think small computer cluster). We do not have system administration support, so we administer all of this ourselves (we are a genomics lab). So, what we seek is a relatively low-cost, acceptable-performance storage solution that will allow future growth and flexible configuration (think ZFS with different pools having different operating characteristics). We are probably outside the realm of a single NAS. We have been thinking about a combination of ZFS (on openindiana, for example) or btrfs per server with glusterfs running on top of that if we do it ourselves. What we are weighing that against is simply biting the bullet and investing in Isilon or 3Par storage solutions. Any suggestions or experiences are appreciated.
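
    As an illustrative sketch only (device names, pool names and dataset layout are invented, and it assumes the ZFS-on-openindiana route mentioned above), "different pools with different operating characteristics" can look like one performance-oriented pool of striped mirrors and one capacity-oriented raidz2 pool, with per-dataset tuning on top:

        # Performance pool: striped mirrors (fast resilver, good random I/O)
        zpool create fastpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

        # Capacity pool: raidz2, two-disk redundancy per vdev
        zpool create bulkpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

        # Per-dataset characteristics instead of per-filesystem partitions
        zfs set compression=on bulkpool
        zfs create -o recordsize=128K bulkpool/genomes    # large sequential reads/writes
        zfs create -o recordsize=8K   fastpool/databases  # small random I/O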

    Read the article

  • How can one keep secure regular backups of his desktop on a remote server through aDSL? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers, duplicity for some others. Both work fine, each one with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server; but the problem is that once in a while I must do a full backup. My emails and important files are 9G, and I expect this to increase. Uploading through aDSL at 1Mbit would be 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must be running on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up these files. In that case, the question is whether I can sync the files on many computers; is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient (but not a big deal). Maybe also, when I need to restore a file, finding the correct file to restore could be a pain, because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use, and I should settle for an external disk?
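
    On the encfs question specifically, a hedged sketch (paths and host names are placeholders): encfs keeps the volume key and settings in a .encfs6.xml file inside the encrypted directory, so the same encrypted tree synced to several machines is opened with the same key and passphrase on each of them.

        # First machine: create/mount the encrypted tree; this writes ~/crypt/.encfs6.xml
        encfs ~/crypt ~/clear

        # Sync only the encrypted files (including .encfs6.xml) to the remote server
        unison ~/crypt ssh://backuphost//srv/backups/crypt

        # Second machine: after syncing ~/crypt down, the same passphrase opens it
        encfs ~/crypt ~/clear

        # The config can also live outside the encrypted tree if preferred:
        # ENCFS6_CONFIG=~/encfs.xml encfs ~/crypt ~/clear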

    Read the article

  • What can be done to improve time synchronization on networks with sporadic internet access?

    - by anregen
    I'm looking for advice setting up time servers for a very non-typical network. I support many closed networks that have occasional access to the internet. A network would get access most days for a few hours, but would frequently go 1-3 weeks blacked-out. The computers/servers on this network are mostly *nix-based, but not all the same flavor. The entire network is mobile, so when it connects, it will have very different hops/latency to internet time servers. The servers on the closed network are powered off frequently (at least daily). Right now, my gut tells me to use NTP (because I hate re-learning all the stuff that someone else already got working pretty well). But I have several issues, and am looking for someone with experience in this type of strange situation. I currently have no solution in place; I'm simply letting the internal clocks drift. This results in errors of ~600s in a majority of networks. I have seen mismatches worse than 10,000s. Is there something "better" than NTP in this situation? I know NTP likes to have very frequent, consistent access to servers that give nearly identical answers. I won't have that. How many internal NTP servers should I configure, so that during periods of internet blackout, I have internal time that is consistent within the closed network? There is no human access. No matter how large the mismatch, the server(s) must attempt to correct themselves. Discrete steps are very bad. No matter how large the mismatch, the correction must be "slewed", not "stepped". I understand that this could take many hours to correct.
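
    A hedged ntp.conf sketch for this "slew only, never step, tolerate long blackouts" requirement (server names are placeholders, and the tinker lines are deliberately aggressive rather than a general recommendation). One caveat worth stating explicitly: ntpd slews at most about 500 ppm, so a 600 s error takes on the order of two weeks to slew out, and a 10,000 s error is effectively never corrected by slewing alone.

        # Never refuse to sync because the offset looks "too large", and never step
        tinker panic 0
        tinker step 0

        # Internet servers, used whenever the uplink happens to be available
        server 0.pool.ntp.org iburst
        server 1.pool.ntp.org iburst

        # One or two internal servers keep the closed network consistent during blackouts
        server ntp1.internal iburst prefer
        server ntp2.internal iburst

        # Orphan mode: if all upstream sources vanish, the internal servers elect a
        # leader and keep serving a common (if drifting) time to the network
        tos orphan 10

        driftfile /var/lib/ntp/ntp.drift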

    Read the article

  • mysqldump isn't able to export a specific database, phpMyAdmin crashes

    - by Devils Child
    I'm experiencing problems with a database on my server (note: all other databases work fine). When I try to export it with mysqldump I get this error:

        # mysqldump -u root -pXXXXXXXXX databasename > /root/databasename.sql
        mysqldump: Couldn't execute 'show table status like 'apps'': Lost connection to MySQL server during query (2013)

    Also, phpMyAdmin throws an error when selecting this database and immediately logs out. However, the web site which uses this database works fine. I can also execute SELECT statements on the table named "apps" from the MySQL shell. I tried restarting the MySQL daemon as well as REPAIR DATABASE and REPAIR TABLE, but the problem still persists. I had this problem before; it then disappeared somehow without me doing anything to resolve the issue. Now the problem is back and I'm simply unable to create a backup of this database.
    Used software: Debian 6.0.7 x64, MySQL 5.1.66-0.
    MySQL version:

        mysql> SHOW VARIABLES LIKE "%version%";
        +-------------------------+-------------------+
        | Variable_name           | Value             |
        +-------------------------+-------------------+
        | protocol_version        | 10                |
        | version                 | 5.1.66-0+squeeze1 |
        | version_comment         | (Debian)          |
        | version_compile_machine | x86_64            |
        | version_compile_os      | debian-linux-gnu  |
        +-------------------------+-------------------+
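
    A hedged diagnostic sketch, not a fix for whatever is wrong underneath (the database and table names are taken from the error above; everything else is standard mysqldump/mysqlcheck usage): dumping around the table whose status query kills the connection shows whether the rest of the database can at least be backed up, and watching the server log while reproducing shows whether mysqld is actually crashing.

        # Dump everything except the table whose status query drops the connection
        mysqldump -u root -p databasename --ignore-table=databasename.apps > most_tables.sql

        # Try the problem table on its own
        mysqldump -u root -p databasename apps > apps.sql

        # Check (not repair) the table, and watch the server log while doing so
        # (on Debian, mysqld errors often end up in syslog)
        mysqlcheck -u root -p --check databasename apps
        tail -f /var/log/syslog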

    Read the article

  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect that my current partition table is MBR, and I need GPT on my new drive since it is 3TB. Can I simply copy the MBR partition onto the new disk and then convert it to GPT after the fact (can you even convert the type of a partition table)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT partition? If it matters, my motherboard is 1 year old as of May 2012.
    Edit: My motherboard is 1 day old. My old one does not have UEFI compatibility, so I decided to make an upgrade to Intel today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to the Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
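
    A hedged sketch of the usual in-place route (the device name is a placeholder, and it assumes the partitions have already been cloned onto the new disk): gdisk can rewrite an existing MBR partition table as GPT without touching the partition contents. Converting the table does not by itself make the clone bootable, though; Windows 7 x64 only boots from GPT on a UEFI system, and a BIOS/MBR-style clone normally needs its boot files rebuilt afterwards.

        # Inspect the current (MBR) layout on the new disk
        sudo gdisk -l /dev/sdb

        # Open the disk interactively; gdisk converts the MBR layout to GPT in memory,
        # and the 'w' command writes the new GPT table to disk
        sudo gdisk /dev/sdb

        # Afterwards, verify the result
        sudo gdisk -l /dev/sdb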

    Read the article

  • Can I regenerate the rsa key for SSH access to a Cisco router? Or should I completely erase the SSH config?

    - by Josh
    I have a production 2691 that I administer via telnet. I'd like to change that to SSH. Looking at the config, it looks like there have been keys generated in the past. I think the history here is that SSH was set up, they had issues connecting, and fell back to telnet. There are a number of crypto entries, including the following:

        crypto pki trustpoint Gateway-2691.xxx.com
         enrollment selfsigned
         subject-name cn=IOS-Gateway-2691.xxx.com
         revocation-check none
         rsakeypair Gateway-2691.xxx.com

    I've also got this going...

        Gateway-2691#sh ip ssh
        SSH Disabled - version 1.99
        %Please create RSA keys (of atleast 768 bits size) to enable SSH v2.
        Authentication timeout: 120 secs; Authentication retries: 3
        Gateway-2691#

    My question is simply: can I run crypto key generate rsa again to set it up again? Is there a way to negate or "no" all of the previous SSH config so that I can start fresh there? I may be asking the wrong questions, as I'm learning here. As for the SSH how-to, I'm sure I can find information in many places. I'm just basically wondering if I need to start fresh, or if I can pick up where the last attempt at SSH config left off.
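
    As a hedged IOS sketch (the trustpoint name is taken from the config above; the key label, modulus and domain name are assumptions): re-running crypto key generate rsa will simply prompt to replace the existing keys, while a clean start usually means zeroizing the old key material, optionally removing the stale trustpoint, and generating a fresh key before enabling SSHv2.

        conf t
         ! removes all existing RSA key pairs
         crypto key zeroize rsa
         ! optional: drop the old self-signed trustpoint (prompts for confirmation)
         no crypto pki trustpoint Gateway-2691.xxx.com
         ip domain-name xxx.com
         crypto key generate rsa label SSH-KEY modulus 2048
         ip ssh version 2
         line vty 0 4
          transport input telnet ssh
         end

    Keeping telnet in "transport input" until SSH is verified avoids locking yourself out of a production router that is currently managed over telnet; it can be narrowed to ssh-only afterwards.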

    Read the article

  • Connecting a Wifi router to receivers with a cable instead of antenna?

    - by 31eee384
    This is a very strange question--I'd go so far as to say it's a stupid question. I'm being told that it is possible to, to describe it briefly, use a cable to connect an access point and a receiver directly to one another. This means that I would unscrew the access point's antenna, and attach one end of a cable to the port. Then, on the wireless receiver, I would also unscrew the antenna and plug in the other side of the cable. I'm being told the connection would work after this, just as a normal Wifi connection would. Bonus mini-question: if this works, would it still work if a splitter were attached to the access point and multiple receivers plugged in to the network? What would happen if I do this? Based on my surprisingly deficient knowledge of radio transmission, I don't think it would work. I would like some help knowing why it won't (or will) though, if possible. This is a somewhat hypothetical question--I realize that Ethernet does this exact job very handily, and I could just throw in a switch instead of the splitter. I simply feel that I should understand this scenario. Thanks for any help you can offer.

    Read the article

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
    Hi, I'm having loads of trouble trying to integrate proxy, rewrite, and SSL altogether in Apache 2. A brief history: my application runs on port 8080, and before adding SSL I used proxy to strip the 8080 from the URLs to and from the server. So instead of www.example.com:8080/myapp, the client app accessed everything via www.example.com/myapp. Here was the conf that accomplished this:

        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /myapp http://www.example.com:8080/myapp
        ProxyPassReverse /myapp http://www.example.com:8080/myapp

    What I'm trying to do now is force all requests to myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number as my application used to. Simply changing the ports 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are already reaching the end of the line.

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}
        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        SSLProxyEngine on
        ProxyPass /myapp https://www.example.com:8443/mloyalty
        ProxyPassReverse /myapp https://www.example.com:8433/mloyalty

    As this stands, a request to anything on the server other than /myapp loads fine with http. If I make a browser http request to /myapp it then redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application then resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (http and https) to anything one level beyond just /myapp, i.e. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information there is for me to give, but I think my goals should be clear.
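
    A hedged sketch of one way to lay this out (the certificate paths are placeholders, the backend address is the one already used above, and it assumes the application can keep listening on plain 8080 internally; mod_ssl, mod_rewrite and mod_proxy must be enabled): the plain-HTTP virtual host only redirects /myapp to HTTPS, and the HTTPS virtual host carries the same port-stripping proxy rules as before. Terminating SSL at Apache this way means SSLProxyEngine and an 8443 backend are only needed if the application itself must be reached over TLS.

        <VirtualHost *:80>
            ServerName www.example.com
            RewriteEngine On
            # Send only /myapp traffic to HTTPS, leave everything else on plain HTTP
            RewriteRule ^/myapp(.*)$ https://www.example.com/myapp$1 [R=301,L]
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/example.crt
            SSLCertificateKeyFile /etc/ssl/private/example.key

            ProxyRequests Off
            # Same port-stripping proxy as the original HTTP setup
            ProxyPass        /myapp http://www.example.com:8080/myapp
            ProxyPassReverse /myapp http://www.example.com:8080/myapp
        </VirtualHost>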

    Read the article

  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router with an AlliedWare OS (2.9.1), I would like to disable access to all management services of the router except for a number of subnets (or alternatively have what is a "management VLAN" with other manufacturers' switch and router models).
    What I have tried so far:
    - creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended, as I can access the LOCAL IP set from any of the other VLAN interfaces - the traffic is apparently not going through my defined filter set at all
    - creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all; the counters for the filter set remain at zero packets
    - setting the Remote Security Officer Level IP address range: this only restricts the ability for a user with the Security Officer privilege level to log in from any but the specified address ranges / subnets. Unfortunately, it does not prevent service availability (and thus DoS capacity) or the ability to log in as a less privileged user (e.g. a "manager")
    - calling technical support: unfortunately no solution so far
    What I have not tried:
    - creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP: I would like to reduce the overhead induced by IP filters, as the router is already CPU-constrained at times. Setting up filters for every IP interface would mean that each and every packet would have to pass the filters, thus consuming CPU cycles. If by any means possible, I would like to find a different solution.

    Read the article

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds extension like YYYYMMDD instead of simply adding a number
            dateext
            # If log file is missing, go on to next one without issuing an error msg
            missingok
            # Save logfiles for the last 49 days
            rotate 49
            # Old versions of log files are compressed with gzip
            compress
            # Postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # Do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
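
    Two hedged notes on the open questions above (the file name and path in the cron line are assumptions): dateext alone appends -YYYYMMDD, giving access.log-20101204; on logrotate versions that support the dateformat directive, adding "dateformat -%Y-%m-%d" inside the block produces the access.log-2010-12-04 form. As for timing, logrotate has no scheduler of its own and is normally invoked from /etc/cron.daily, so rotating at 23:00 means running it from cron at that hour, for example:

        # /etc/cron.d/logrotate-nginx: run only the nginx logrotate config at 23:00
        0 23 * * * root /usr/sbin/logrotate /etc/logrotate.d/nginx

    If the file also stays in /etc/logrotate.d/, the default daily run will pick it up as well, so with a dedicated cron entry it is common to keep the nginx config outside that directory.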

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of Active Connections is usually higher than the number of Inactive Connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       12          252
        -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

        -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
        -> 10.84.32.21:0         Masq     1       313         141
        -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3
    New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed)
    Is it because the old load balancer was running a kernel from 2005, and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance!
    Additional info: I'm using LVS in masquerading mode; the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same number of Inactive Connections as shown in ipvsadm.

    Read the article

  • Running WAMP (XAMPP) and LAMP from One SSD, On 64-bit Windows and Linux Machines

    - by nicorellius
    I have a solid-state drive that I develop websites on. The reason I do this is because I work on a few different computers. Historically, I created separate development environments to use for each machine. This was OK, but if the system changed for some reason, e.g. a new OS install, it was a pain. So I bought a USB 3.0 enclosure and put a solid-state drive in there, and it's pretty darn fast, which is good. I was working with three Windows machines and I could simply hook up the drive, launch my XAMPP server and away I went, developing websites: using Dreamweaver, Komodo, Notepad++, Eclipse, etc. Recently, however, one of my Windows machines' hard drive went down and instead of going back to Windows in this case, I went with Ubuntu 12.04. I have several Ubuntu workstations and servers and I like Linux, so I thought this was a great opportunity to transition. I went to work installing and trying to set up a LAMP server and, apart from XAMPP's lack of 64-bit compatibility out of the box, I'm seeing other issues with getting this Linux server running. I will keep trying to resolve this, but in the meantime... my question is, has anyone ever successfully run both WAMP and LAMP from the same SSD (formatted to NTFS)? I'm sure there are lots of barriers to this happening, like the local file system, OS libraries, dependencies, etc. But I was thinking it would be cool if it could be done. I'm no expert, so if this is just plain old stupid, please don't hesitate to let me know.

    Read the article

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an unsolvable equation: Chromium v.7 + Ubuntu 10.04 + Sun Java 6 + Webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:
    1) Installing the Sun (I refuse to say oracle sob)-provided Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.
    2) Somehow enabling Facebook as a permitted site to access my webcam using Flash settings.
    I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7. Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

    Read the article
