Search Results

Search found 36705 results on 1469 pages for 'update apt xapian index'.


  • Make Thunderbird store all mail locally for IMAP accounts without indexing it all for search

    - by rubo77
    In Thunderbird, the global (gloda) search is tied to the selection of downloaded/synchronized folders in the IMAP accounts' offline settings. Is it somehow possible to have Thunderbird download/sync all emails in an IMAP account but not add them to the index for the global search? I would like to do this because I have some accounts that I keep in Thunderbird only for archiving reasons, and I don't want to find those mails when I use the global search.
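
    A hedged sketch of the closest knob I know of — not the per-account switch the question asks for, which Thunderbird does not expose as a single preference: the global indexer can be disabled entirely in the Config Editor, and per-folder exclusion is done through Folder Properties in recent releases rather than through a pref.

        // Config Editor (about:config) — disables gloda indexing entirely;
        // note this kills global search for all accounts, not just archives.
        mailnews.database.global.indexer.enabled = false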

    Read the article

  • 403 error with CodeIgniter

    - by DJB
    When I type in the standard web address for my site, I get a 403 error. However, when I type in a more exact address, say pointing to an index.php file, everything shows up fine. I'm using Anodyne Productions' Nova (SMS 3), which uses CodeIgniter. All accompanying software (PHP/MySQL) is compatible. I'm not a very technical person, so I'm hoping that this is an easy fix. Thanks for taking the time to answer.
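
    A hedged guess at the cause: a 403 on the bare URL while /index.php loads fine usually means no directory index is configured (and directory listings are disabled). Assuming the host runs Apache, one line in an .htaccess file in the site root often fixes it:

        # .htaccess — serve index.php whenever a bare directory is requested
        DirectoryIndex index.php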

    Read the article

  • How do I prevent lighttpd from caching static files, even when modified on disk?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a dir that I regularly update. This will change the file content (and file size) as well as the modification date, but not the filename. When I access the files over HTTP, the updates are not taken into account and lighttpd serves the old file. If I manually rename the file to something different, lighttpd returns a 404 error, and if I rename it back, I get the correct updated version. It seems lighttpd is using some kind of cache mechanism of its own (which is fine) to return static files. Unfortunately, this mechanism doesn't seem to update itself when files are modified. I checked with Wireshark, and my browser really is making a request for the file, so this is not a browser caching issue. It returns a 200 OK when requesting from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighttpd to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
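
    A hedged sketch of one directive worth trying: lighttpd keeps a stat cache, so a file rewritten in place can keep serving stale size/mtime information. Disabling the cache (or switching it to FAM) makes lighttpd re-stat files on each request:

        # lighttpd.conf — stop caching stat() results for served files
        server.stat-cache-engine = "disable"   # or "fam" if gamin/FAM is installed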

    Read the article

  • Error installing Java on Ubuntu Server

    - by Camran
    I get this error almost at the end of the installation:

        /proc is not mounted; some java apps may fail
        Could not create the Java virtual machine.
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Ignoring error generating classes.jsa

    Why is this? I just entered:

        sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts

    Is there something I must do first? What is /proc? If you need more input, let me know. Thanks
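
    A hedged sketch, assuming the install is running in a chroot or container (the usual reason /proc is missing): /proc is the kernel's process-information pseudo-filesystem, and the JVM sizes its heap from data it reads there, hence the "could not reserve enough space" error. Mounting proc first typically lets the install finish:

        # mount the proc pseudo-filesystem, then retry the install
        sudo mount -t proc proc /proc
        sudo apt-get install sun-java6-jre sun-java6-plugin sun-java6-fonts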

    Read the article

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two-disk RAID0 software RAID (not hardware RAID) that is reported to have failed during boot, and my OS won't start. Using a Live CD, I get the following output:

        sudo mdadm -E /dev/sdc1 /dev/sdd1
        /dev/sdc1:
                 Magic : a92b4efc
               Version : 1.2
           Feature Map : 0x0
            Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                  Name : HP-xw9300:0
         Creation Time : Sun Sep 1 15:22:26 2013
            Raid Level : -unknown-
          Raid Devices : 0
        Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB)
           Data Offset : 16 sectors
          Super Offset : 8 sectors
                 State : active
           Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af
           Update Time : Sun Jun 8 12:35:11 2014
              Checksum : a37407ff - correct
                Events : 1
           Device Role : spare
           Array State : ('A' == active, '.' == missing)
        /dev/sdd1:
                 Magic : a92b4efc
               Version : 1.2
           Feature Map : 0x0
            Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
                  Name : HP-xw9300:0
         Creation Time : Sun Sep 1 15:22:26 2013
            Raid Level : -unknown-
          Raid Devices : 0
        Avail Dev Size : 976771056 (465.76 GiB 500.11 GB)
           Data Offset : 16 sectors
          Super Offset : 8 sectors
                 State : active
           Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348
           Update Time : Sun Jun 8 13:06:19 2014
              Checksum : 8883c492 - correct
                Events : 1
           Device Role : spare
           Array State : ('A' == active, '.' == missing)

    GParted lists both disks, detects the flags as 'Raid', and lists the data usage. Can anyone please help me re-assemble the array, just so that I can copy off some of the data that I have not backed up recently? Thanks
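
    A hedged sketch of first steps, not a definitive recovery procedure: with both superblocks reporting "Raid Level : -unknown-" and "Device Role : spare", a forced assemble is the usual first attempt. RAID0 has no redundancy, so image the members first if at all possible (destination paths below are placeholders):

        # image both members before touching anything
        sudo dd if=/dev/sdc1 of=/mnt/backup/sdc1.img bs=1M
        sudo dd if=/dev/sdd1 of=/mnt/backup/sdd1.img bs=1M
        # then try a forced assemble
        sudo mdadm --stop /dev/md0
        sudo mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sdd1
        # last resort, only with the original member order and chunk size:
        # re-creating rewrites just the superblocks, but a wrong order or
        # chunk size scrambles the data.
        # sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1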

    Read the article

  • Postgres + OpenStreetMap Amazon snapshot

    - by user32425
    Hi, I am trying to start Postgres using /etc/init.d/postgresql-8.3 start, but I get this:

        * Starting PostgreSQL 8.3 database server
          * Error: The server must be started under the locale : which does not exist any more.
        ...fail!

    I tried the solution in http://ubuntuforums.org/archive/index.php/t-397005.html but it still does not work. How can I fix this? Thanks
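
    A hedged sketch: the empty locale name in the error usually means the locale the cluster was initialised with is no longer generated on the system (common after restoring a snapshot onto a fresh machine). Regenerating it often fixes startup; the locale name below is an example, the real one comes from the original cluster:

        sudo locale-gen en_US.UTF-8
        sudo update-locale
        sudo /etc/init.d/postgresql-8.3 start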

    Read the article

  • E: The package adobe-flashplugin needs to be reinstalled, but I can't find an archive for it. Ubuntu 9.10

    - by Paikkos
    I had some problems with the Amarok player: I couldn't get any sound from it. Anyway, I fixed it somehow from the terminal with some packages, and after a couple of reboots I now have a problem with adobe-flashplugin. I'm trying this in the terminal:

        sudo apt-get install adobe-flashplugin
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package adobe-flashplugin needs to be reinstalled, but I can't find an archive for it.

    I also tried the Update Manager, but I get something about a partial upgrade, and that isn't working either; I get a message that I have to install it manually or remove it completely. So, how can I remove it completely so I can install it cleanly again from the beginning?
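
    A hedged sketch of the usual way out of the "needs to be reinstalled" state: force dpkg to drop the broken package record despite the reinstall requirement, then install it cleanly:

        # remove the broken record, ignoring the reinstall requirement
        sudo dpkg --remove --force-remove-reinstreq adobe-flashplugin
        sudo apt-get update
        sudo apt-get install adobe-flashplugin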

    Read the article

  • How to forward traffic using iptables rules?

    - by ProbablePattern
    I am new to iptables and I have been doing Google searches for a few days now without finding a good solution to this problem. I have computer A with a public IP address (say 192.0.2.1) that can access the Internet unrestricted. I have another computer B with a private IP address (192.168.1.1) that can only access computer A. How do I use iptables to forward network traffic from B through A to the Internet? I need to use http, ftp, and https in order to use apt-get with sudo. Both computers run Ubuntu Linux. I have tried using Squid, but I think it is far too complicated for what I need to do.
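
    A hedged sketch of the classic NAT setup on computer A, assuming eth0 faces the Internet and eth1 faces the 192.168.1.0/24 network (interface names are assumptions; adjust to the real ones):

        # on A: enable forwarding and masquerade traffic coming from B
        echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    On computer B, the default gateway then has to point at A's address on the private network. Because this forwards at the IP layer, http, https, and ftp all work without per-protocol proxies.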

    Read the article

  • Allow connections to only a specific URL via HTTPS with iptables, -m recent (potentially) and -m string (definitely)

    - by The Consumer
    Hello, let's say that, for example, I want to allow connections only to subdomain.mydomain.com. I have it partially working, but it sometimes gets in a freaky loop with the client key exchange once the Client Hello is allowed. Ah, to make it even more annoying: it's a self-signed certificate, the page requires authentication, and HTTPS is listening on a non-standard port... so the TCP/SSL handshake experience will differ greatly for many users. Is -m recent the right route? Is there a more graceful method to allow the complete TCP stream once the string is seen? Here's what I have so far:

        #iptables -N SSL
        #iptables -A INPUT -i eth0 -p tcp -j SSL
        #iptables -A SSL -m recent --set -p tcp --syn --dport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK SYN,ACK --sport 400
        #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK ACK --dport 400
        #iptables -A SSL -m recent --remove -p tcp --tcp-flags PSH,ACK PSH,ACK --dport 400 -m string --algo kmp --string "subdomain.mydomain.com" -j ACCEPT

    Yes, I have tried to get around this with nginx tweaks, but I can't get nginx to return a 444 or an abrupt disconnect before the Client Hello; if you can think of a way to achieve that instead, I'm all ears, err, eyes. (As suggested by a user, bringing this inquiry over from http://stackoverflow.com/questions/4628157/allow-connections-to-only-a-specific-url-via-https-with-iptables-m-recent-pote)

    Read the article

  • MySQL InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns, with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of them into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the format:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory/CPU, and the whole DB should fit into memory with plenty of room to grow.
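
    A hedged alternative worth measuring: since the pre-calculated values already live in a table, a single join update lets InnoDB make one pass over product instead of parsing millions of IN-list statements. Table and column names follow the question; the name of the updates table and the database are assumptions:

        mysql mydb <<'SQL'
        -- one pass over product, driven by the precomputed updates table
        UPDATE product p
        JOIN product_updates u ON u.product_id = p.id
        SET p.update_value = u.update_value;
        SQL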

    Read the article

  • Forensics on Virtual Private servers [closed]

    - by intiha
    So these days, with all the talk of hacked machines being used for malware spreading and botnet C&C, the one issue that is not clear to me is: what do law enforcement agencies do once they have identified a server as the source or controller of an attack/APT, and that server is a VPS in my cluster/datacenter? Do they take away the entire machine? That option seems to carry a lot of collateral damage, so I am not sure what happens, and what the best practices are for sysadmins who want to help law enforcement do its job while keeping our own jobs!

    Read the article

  • Sticking with Ubuntu 12.04 while heavily using PPA for newest software updates (Apache 2.4, PHP 5.5)

    - by MechaStorm
    I was wondering whether it is worthwhile to stick with Ubuntu 12.04 LTS until 14.04 comes out, or whether I should switch to the latest Ubuntu Server version, 13.10. My server needs are not enterprise-heavy, and my earlier reasoning for staying on an LTS was simply to get security updates without having to upgrade the servers every couple of months. But as we move forward with our software development, I have found that a lot of the default software versions in 12.04 are way out of date, forcing me to update via PPA or from source instead of via the default apt-get: e.g. 12.04 ships PHP 5.3, and I'd like to get to 5.5. Is it worthwhile to simply move to 13.10 in that situation, with the idea of moving to 14.04 when it comes out?

    Read the article

  • php5-suhosin Broken Installation

    - by h00j
    Hi, I get the following error when trying to install; do you have any idea how to fix it?

        apt-get install php5-suhosin
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         php5-suhosin : Depends: phpapi-20090626
        E: Unable to correct problems, you have held broken packages.

    I also get the following error in my apache2 error log:

        [Mon May 07 21:43:15 2012] [error] [client ip] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0
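
    A hedged reading of the two errors together: the packaged php5-suhosin was built against phpapi-20090626 (PHP 5.3), while the 20100525 module path in the Apache log points at PHP 5.4, for which no suhosin package exists in the archive; the startup warning comes from a stale config entry still trying to load the missing extension. A sketch of confirming and cleaning up, with /etc/php5/conf.d being the usual Debian/Ubuntu location:

        # confirm the API mismatch between the installed PHP and the package
        apt-cache depends php5-suhosin
        dpkg -l 'php5*' | grep ^ii
        # stop Apache from trying to load the missing suhosin.so
        sudo rm /etc/php5/conf.d/suhosin.ini
        sudo service apache2 restart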

    Read the article

  • Debian/Linux backup files changed by user

    - by verhogen
    I would like to back up my server, which hosts a few websites, in such a way that I can restore everything to the way it was from a fresh format. I know that I should back up all the home folders and probably my /etc folder. Is there a way to identify all the folders that are relevant for backup, i.e. those that were not automatically generated or installed via apt-get? Ideally it would restore all the users with their current passwords as well. Basically, enough to clone the system while copying only configuration files.
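
    A hedged sketch of the common Debian pattern: record the package selections, then archive /etc, /home and any other unpackaged data; restoring the selections plus those archives gets close to a clone, and /etc/shadow inside the /etc archive carries the user passwords (paths below are examples):

        dpkg --get-selections > /backup/selections.txt
        sudo tar czpf /backup/etc.tar.gz /etc
        sudo tar czpf /backup/home.tar.gz /home
        # debsums (a separate package) lists config files changed from their packaged versions
        sudo debsums -ce > /backup/changed-conffiles.txt
        # on a fresh install, roughly:
        # sudo dpkg --set-selections < /backup/selections.txt && sudo apt-get dselect-upgrade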

    Read the article

  • Experiences with eXdupe?

    - by ewwhite
    I noticed that the eXdupe compression/archiving/deduplication utility was recently mentioned in another post here. It boasts some interesting features, and I've been playing with it for the past day. It's basically a cross-platform, highly multithreaded archival tool. http://exdupe.com/index.html I'm curious if anyone here uses it in production or has any tips on how to leverage the tool in their environment. I'm looking for suggestions.

    Read the article

  • ssh connection with full tab key support

    - by kusoksna
    I have an Ubuntu 10.04 installation. When I open a terminal, the Tab key works fine: e.g. I type "apt-get install mysql", press Tab, and see all the options. But when I connect via ssh, the Tab key only works before the first space, so it does nothing in the above example. I have tried connecting with different clients (ssh, PuTTY, etc.) and always get the same behavior. My question is: how do I make the Tab key work properly? Is the problem in the server or the client?
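
    A hedged guess: completion past the first word (argument completion) comes from the bash-completion package, which the login shell you get over ssh may not be sourcing, while your local terminal does. A sketch of checking and fixing this on the server:

        echo $SHELL                            # confirm the remote login shell is actually bash
        sudo apt-get install bash-completion
        # make sure it is sourced for login shells, e.g. in ~/.bashrc:
        # . /etc/bash_completion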

    Read the article

  • Windows 7 misses keystrokes from internal keyboard after hibernation (on Acer Aspire 5820)

    - by ron
    I face a very strange symptom on my Acer Aspire laptop (with the factory-default Win7 install and drivers; Windows Update running). After waking the computer from hibernation, typing is a pain, since on average 5-10 keypresses per 100 are missed when using the laptop's keyboard.

    Steps to reproduce:
    1) Power off.
    2) Power on, wait for the system to become usable.
    3) Open Notepad and, for 5 different characters, hit each one 10 times. This gives a pattern of 50 chars like: xxxxxxxxxxyyyyyyyyyyaaaaaaaaaassssssssssdddddddddd
    4) Optionally repeat. Everything is fine this far.
    5) Hibernate.
    6) Power on and resume.
    7) Repeat steps 3)-4). This time approximately 3-5 characters will be missing from each 50.

    What I ruled out:
    - Putting the machine to Sleep, or just Locking it, and resuming does not cause the problem.
    - Battery vs. AC does not matter.
    - Network connection does not matter.
    - Running processes seem to be the same before and after hibernation.
    - Keypress speed doesn't really matter; for the test I use a nominal 3-5 strokes/second beat.
    - Plugging in an external USB keyboard works fine, but the built-in one still misbehaves.

    What could be the problem? How could I diagnose whether the keypresses arrive but get swallowed at some point? (Maybe some nasty keyboard handler hook misbehaves?)

    Update: It seems that pushing the PowerSmart button and toggling to the power-saving state fixes the problem. Toggling it back to the original state keeps it fixed. So this may be a fine workaround, but it is not a conforming solution.

    Read the article

  • Nginx common configuration that I might have missed

    - by ApPeL
    I recently moved from Apache mod_wsgi to Nginx, and I have seen a major improvement in speed and a lowering of memory usage; I am generally very happy with it. I am not a server expert, so please be gentle. I am wondering if there are any small configuration details I might have missed that will cause me issues in the long run. Please see my nginx.conf file:

        user nginx nginx;
        worker_processes 4;
        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';
            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;
            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;
            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;
            output_buffers 1 32k;
            postpone_output 1460;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 75 20;
            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /media/ {
                    # Notice this is the /media folder that we create above
                    root /www/django_test1/myapp;
                }

                location /mediaadmin/ {
                    # Notice this is the /media folder that we create above
                    alias /opt/python2.6/lib/python2.6/site-packages/django/contrib/admin/media/;
                }

                location / {
                    # host and port of the fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param SERVER_ADDR $server_addr;
                    fastcgi_param SERVER_PORT $server_port;
                    fastcgi_param SERVER_NAME $server_name;
                    fastcgi_param SERVER_PROTOCOL $server_protocol;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    client_max_body_size 100M;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. I copy a 4 GB file from the external USB drive to the server disk. In the VM that takes up to 10 minutes; on a native Win2003 box it takes approx. 3 minutes. I have no explanation for that difference. In either case the bottleneck should be the USB connection, which is much slower than the disks (SAS, RAID1). If the USB connection in the VM were USB 1.1 rather than USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is correct; see the update.) Could it be that my native box is extremely fast and the VM is the normal case?

    Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes, still about 2 times slower than the native connection. I also did another measurement: copying between partitions on the same VM takes 3 minutes.

    Read the article

  • How to prevent Windows 8 from erasing GRUB?

    - by dirleyrls
    I'm dual-booting Ubuntu and Windows 8 on my DELL laptop. EFI is enabled, Secure Boot is not. My partitions are GPT. Everything seems to work for some time; then, after some normal use, GRUB stops working. The "ubuntu" EFI entry is still there on top of everything else, but the computer boots directly into the Windows bootloader, skipping GRUB. Any clues as to why that is happening, or how I can prevent it? My current partition setup is:

        /dev/sda1  NTFS   Windows recovery
        /dev/sda2  FAT32  UEFI boot (with boot flag)
        /dev/sda3  unknown (msftres flag)
        /dev/sda4  NTFS   Windows drive C
        /dev/sda5  ext4   /home
        /dev/sda6  ext4   /

    I usually reinstall GRUB by chrooting from a Live Session and doing apt-get install --reinstall grub-efi-amd64.
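
    A hedged sketch of pinning the boot order instead of reinstalling GRUB each time; on many machines it is the firmware's BootOrder being reset (by the firmware itself or by Windows) rather than GRUB being erased. Entry numbers below are examples; read the real ones from the listing:

        sudo efibootmgr                   # list entries and the current BootOrder
        sudo efibootmgr -o 0000,0001      # put the ubuntu entry (e.g. Boot0000) first

    From the Windows side, a frequently cited equivalent is bcdedit /set {bootmgr} path \EFI\ubuntu\grubx64.efi, and disabling Windows 8's Fast Startup is also commonly suggested.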

    Read the article

  • Apache redirects directories

    - by Ziaix
    So, I'm trying to redirect any page to a file, while avoiding redirecting anything that's an existing file or directory:

        RewriteEngine On
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
        RewriteRule ^(.+)$ $1 [L]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteRule ^(.+)$ /index.php?page=$1 [QSA]

    However, directories still get redirected (existing files are fine and can be located).
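
    A hedged sketch of the usual fix: a RewriteCond only applies to the RewriteRule immediately following it, so the directory test has to sit on the rewriting rule itself; the first cond/rule pair then becomes unnecessary. This mirrors the question's DOCUMENT_ROOT prefix:

        RewriteEngine On
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)$ /index.php?page=$1 [QSA,L]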

    Read the article

  • Reliable file copy (move) process - mostly Unix/Linux

    - by mfinni
    Short story: we need a rock-solid, reliable file-mover process. We have source directories, often being written to, that we need to move files from. The files come in pairs: a big binary and a small XML index. We get a CTL file that defines these file bundles. A process operates on the files once they are in the destination directory and gets rid of them when it's done. Would rsync do the best job, or do we need something more complex?

    Long story: we have multiple sources to pull from. One set of directories is on a Windows machine (which does have Cygwin and an SSH daemon), and a whole pile of directories are on a set of SFTP servers (most of these are also Windows). Our destinations are a list of directories on AIX servers. We used to use a very reliable Perl script on the Windows/Cygwin machine when it was our only source. However, we're working on getting rid of that machine, and there are other sources now, the SFTP servers, on which we cannot presently run our own scripts. For security reasons, we can't run the copy jobs on our AIX servers; they have no access to the source servers. We currently have a homegrown Java program on a Linux machine that uses SFTP to pull from the various new SFTP source directories, copies to a local tmp directory, verifies that everything is present, then copies that to the AIX machines and deletes the files from the source. However, we keep finding bugs and poorly handled error cases. None of us are Java experts, so fixing/improving this may be difficult.

    Concerns for us are:
    - With a remote source (SFTP), will rsync leave alone any file still being written? Some of these files are large. From reading the docs, it seems like rsync is very good about not removing the source until the destination is reliably written. Does anyone have experience confirming or disproving this?
    - The ingestion process that operates on the files in the destination directory: we don't want it operating on files while we are still copying them. It waits until the small XML index file is present, so our current copy jobs are supposed to copy the XML file last.
    - Sometimes the network has problems, sometimes the SFTP source servers crap out on us, and sometimes we typo the config files and a destination directory doesn't exist. We never want to lose a file to this sort of error.
    - We need good logs.

    If you were presented with this, would you just script up some rsync? Or would you build or buy a tool, and if so, what would it be (or what technologies would it use)? I (and others on my team) are decent with Perl.
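
    A hedged sketch of the rsync core, assuming the sources allow real ssh logins with rsync installed — rsync cannot speak the SFTP protocol itself, so SFTP-only servers would still need a separate pull step. Host and path names are placeholders:

        # pass 1: big binaries only, leaving the sources in place
        rsync -av --partial --exclude='*.xml' user@source:/outgoing/ /staging/in/
        # pass 2: the small XML indexes last, then delete what was pulled
        rsync -av --remove-source-files user@source:/outgoing/ /staging/in/

    Two caveats under those assumptions: rsync has no built-in guard against files still being written (it copies whatever bytes exist, and a later pass picks up the rest), and whether --remove-source-files also deletes files already up to date from pass 1 depends on the rsync version, so test that before trusting it.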

    Read the article

  • SVN very slow over HTTP (seems auth related)

    - by Sydius
    I'm using SVN version 1.6.6 (r40053) via the command line in Ubuntu 10.04, connecting to a remote repository over HTTP on the local network. For a while it worked fine, but it has recently become very slow for any operation that requires communication with the repository; operations do eventually complete after several minutes (~3m for svn up). Looking at Wireshark, it appears to be taking a full minute between the HTTP auth denial and the subsequent request containing credentials. The issue is local to my machine: other coworkers running Ubuntu are not having it, and I've tried my credentials from another machine and it was very fast. I tried deleting the .subversion folder in my home directory and checking everything out fresh, but it didn't help.

    Update: I think it's auth related. When I check out SVN repositories from the Internet over HTTP (from Google Code, for example), everything is very fast until I do something that requires a password; before prompting for the password the first time, it stalls for at least a minute.

    Update 2: I set the neon-debug-mask in the SVN settings (in /etc/subversion/servers under [Global]) to 138, and it seems to spend a lot of time on 'auth: Trying Basic challenge...'

    Read the article
