Search Results

Search found 12058 results on 483 pages for 'abstract syntax tree'.


  • Issues installing apache debian

    - by Belgin Fish
I'm having issues installing apache2, and pretty much everything in general, on Debian. I run sudo apt-get install apache2 and it returns:

      root@debian:~# apt-get install apache2
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or if you are using the unstable
      distribution that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
       apache2 : Depends: apache2-mpm-worker (= 2.2.16-6+squeeze7) but it is not going to be installed or
                          apache2-mpm-prefork (= 2.2.16-6+squeeze7) but it is not going to be installed or
                          apache2-mpm-event (= 2.2.16-6+squeeze7) but it is not going to be installed or
                          apache2-mpm-itk (= 2.2.16-6+squeeze7) but it is not going to be installed
                 Depends: apache2.2-common (= 2.2.16-6+squeeze7) but it is not going to be installed
      E: Broken packages

    Not really sure what's up :S It seems apt can't find any of the required packages for anything. Anyone know what I'm doing wrong?
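
    A hedged first step, assuming the package lists are stale or the sources.list is incomplete rather than the mirror itself being broken (these are all standard apt commands; the -s flag simulates and changes nothing):

      # refresh the index, then see what apt can actually resolve
      sudo apt-get update
      apt-cache policy apache2 apache2-mpm-worker apache2.2-common
      # dry-run the install to surface the full dependency chain
      sudo apt-get -s install apache2

    If apt-cache policy shows no installation candidate for apache2-mpm-worker, the sources.list is probably missing the main squeeze repository or pointing at a partial mirror.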

    Read the article

  • How should I monitor memory usage/performance in SunOS/Solaris?

    - by exhuma
Last week we decided to add some SunOS machines (uname -a reports: SunOS bbs-sam-belair 5.10 Generic_127128-11 i86pc i386 i86pc) to our running Munin instance. First off, the machines are pre-configured appliances, so I want to avoid touching the system too much without supervision of the service provider. But adding them to Munin was fairly easy by writing a small socket service (if anyone is interested, I put it up on GitHub: https://github.com/munin-monitoring/contrib/tree/master/tools/pypmmn). Yesterday I implemented/adapted the required plugins for our machines, and here the questions start.

    First, I have not found a way to determine detailed memory usage values. I get the total memory by running prtconf | grep Memory, and the free memory using vmstat. Fiddling together a Munin plugin gives me a graph that is pretty much uninformative compared to the default plugin for Linux nodes, which has a lot more detail; most importantly, it shows how much memory is actually used by applications. So, first question: is it possible to get detailed memory information on SunOS with the default system tools (i.e. not using top)?

    On to the next puzzle: looking at the graphs, I noticed activity in the "Paging in/out" graphs even though the memory graph still shows unused memory. Upon further investigation, I found out that df reports /tmp as mounted on swap. Reading around on the web, I understand that df will display it as swap, but it is in fact mounted as a tmpfs. Now I don't know whether this explains the swap activity. The default Munin plugin for Solaris uses kstat -p -c misc -m cpu_stat to get these values, and I already find it strange that this uses the cpu_stat module. So maybe I simply misinterpret the "paging" graphs? Second question: do the paging graphs indicate that parts of memory are paged to disk, or is the activity caused by file operations in /tmp?
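
    A hedged sketch for the first question, assuming root access is acceptable on the appliances: on Solaris 10 the kernel debugger's ::memstat dcmd gives a kernel/anon/page-cache/free breakdown, and kstat exposes the raw page counters a plugin could graph (page size via pagesize):

      # one-shot breakdown of physical memory usage (needs root)
      echo ::memstat | mdb -k
      # raw page counters, suitable for a Munin plugin
      kstat -p unix:0:system_pages:physmem
      kstat -p unix:0:system_pages:freemem
      pagesize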

    Read the article

  • Cacti is ignoring hash marks in interface aliases

    - by Matt Simmons
I'm attempting to set up Cacti to monitor a router's interfaces, and I'm having trouble getting the graph templates to show the information that I'd like. Our interface configuration looks like this:

      interface GigabitEthernet3/6
       description WalljackNumber # Server info
       no ip address
       no shutdown
       switchport
       switchport access vlan 116
       switchport mode access
       ip dhcp snooping trust
       spanning-tree portfast

    The "Server info" string is really just the machine name and a short relevant description, such as "PolarSprings vmnic2". The important part appears to be that it follows the hash mark. When I run snmpwalk, I get the proper output:

      IF-MIB::ifAlias.230 = STRING: WalljackNumber # Server info

    But in Cacti, when I go into the graph templates and set the title to this:

      |host_description| - Traffic - |query_ifName| (|query_ifAlias|)

    all that shows up in the graph is:

      switchname - Traffic - Gi3/6 (WalljackNumber #)

    which strikes me as a little weird. What I suppose MAY be happening is that somewhere in the Cacti pipeline the # is being interpreted as a comment marker and everything after it stripped, but I'm not sure. I was hoping someone could tell me that this is a known, documented behavior, or that I can change it in a setting I'm not aware of. The alternative answer is to change the delimiter from # to something else, but I've got over a thousand lit switchports on an old college infrastructure, and I'm not sure what else might be relying on them.

    Read the article

  • Restricting memory area for linux kernel

    - by user1066789
I am running LTIB Linux on the P1022RDK (P1022 core) platform. I have 512 MB = 0x20000000 of memory. I want my Linux kernel to use the second half of the board memory (i.e. from 256 MB to 512 MB) and the first half of memory to be reserved for some other purpose. For this I am building the Linux kernel using LTIB and setting the following kernel configuration. Please suggest if I am doing it the right way.

      CONFIG_LOWMEM_SIZE = 0x10000000     # 256 MB
      CONFIG_PHYSICAL_START = 0x10000000  # Starting from 256 MB (second half of memory)

    In U-Boot I am loading the kernel the following way:

      setenv loadaddr 0x11000000  # Kernel base = 0x10000000 + 0x01000000 (offset)
      setenv fdtaddr 0x10c00000   # Kernel base = 0x10000000 + 0x00c00000 (offset)
      bootm $loadaddr - $fdtaddr

    My kernel load address is 0x10000000 and the kernel entry point is 0x10000000. With the above configuration/steps, the boot gets stuck at the following point in U-Boot:

      ## Booting kernel from Legacy Image at 11000000 ...
         Image Name:   Linux-2.6.32.13
         Image Type:   PowerPC Linux Kernel Image (gzip compressed)
         Data Size:    3352851 Bytes = 3.2 MB
         Load Address: 10000000
         Entry Point:  10000000
         Verifying Checksum ... OK
      ## Flattened Device Tree blob at 10c00000
         Booting using the fdt blob at 0x10c00000
         Uncompressing Kernel Image ... OK
      ================ It should uncompress the FDT here and continue ==============

    Any thoughts?
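
    A hedged alternative, assuming the goal is only to keep the first 256 MB away from the kernel's allocator rather than to relocate the kernel itself: leave the kernel at its default link/entry address and carve the region out in the device tree with a /memreserve/ entry, which is carried in the DTB that U-Boot passes to the kernel (addresses below are illustrative for this 512 MB layout):

      /* in the board .dts, before the root node */
      /memreserve/ 0x00000000 0x10000000;   /* keep the low 256 MB out of the kernel pool */

      memory {
              device_type = "memory";
              reg = <0x0 0x20000000>;       /* still describe all 512 MB */
      };

    This avoids having to keep CONFIG_PHYSICAL_START, the image load address, and the entry point consistent by hand, which is where a relocated PowerPC kernel most often hangs.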

    Read the article

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information, and I'd prefer not to. In an ideal world I'd like to organise all of the information stored in Outlook in a mind map (my software of choice being FreeMind) or a wiki. To maintain an email audit trail, I've considered saving individual emails as files and using a mind map or wiki to link them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain: either setting up a wiki/mind map, or sticking with what Outlook currently provides. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?

    Read the article

  • How do I compile DarWINE in PowerPC Mac OS X 10.4.11...?

    - by Craig W. Davis
So far I've tried using MacPorts, which gives me this error:

      Error: Cannot install wine for the arch(s) 'powerpc' because
      Error: its dependency pkgconfig is only installed for the archs 'i386 ppc'.
      Error: Unable to execute port: architecture mismatch

    (I'm not allowed to post two links due to being a new poster.) I've also tried the build script I found in the DarWINE 0.9.12 SDK download from the DarWINE SourceForge.net project page, and the build script at http://code.google.com/p/osxwinebuilder/#Building_Wine_via_the_script. None of these attempts to build DarWINE have actually worked. Whenever I build using the DarWINE build script I run it as follows:

      1. I decompress the WINE tarball into ~/Downloads/WINE
      2. I cd into ~/Downloads/DarWINE.
      3. I run ./winemaker ~/Downloads/WINE/wine-1.2.2 or ./winemaker ~/Downloads/WINE/wine-1.2-rc2
         (the reason for trying WINE 1.2-rc2 is that some people managed to get it to build on PowerPC Macs running 10.5.8).

    I made sure to install Xcode Tools 2.5 and all the SDKs too. The net result is either a syntax error when running the checked-out Google Code DarWINE build script, or a bunch of make errors when running the official DarWINE build script that I extracted from the DarWINE 0.9.12 SDK .dmg file using Pacifist:

      make[1]: winegcc: Command not found
      make[1]: *** [main.o] Error 127
      make: *** [dlls/acledit] Error 2

    I'm trying to build DarWINE on a mid-April 2006 1.42 GHz eMac (DL SuperDrive, Bluetooth 2.0+EDR, 2 GB of RAM) running 10.4.11, as mentioned earlier. It came with 10.4.4 on the Mac's restore DVD-ROM that I ordered from 1-800-SOS-APPL, and coconutIdentityCard told me it was made on April 12th, 2006, which fits: when I reinstalled Mac OS X 10.4.4 it showed as registered to a Hawaiian school.

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
I have 2 old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side, or are identical. I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each twice, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10 kB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?", but I can't run AllDup since I'm on Linux, and from the sounds of it, it would only partially solve my issue anyway.
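
    A hedged sketch of one way to separate tag-only changes from real corruption, assuming ffmpeg is available: decoding just the audio stream and hashing the resulting PCM ignores the ID3v1/ID3v2 regions entirely, so two files that differ only in tags should hash identically:

      # hash the decoded audio only; tags never enter the hash
      md5_a=$(ffmpeg -v error -i a.mp3 -map 0:a -f md5 -)
      md5_b=$(ffmpeg -v error -i b.mp3 -map 0:a -f md5 -)
      [ "$md5_a" = "$md5_b" ] && echo "audio identical" || echo "audio differs"

    Looping this over the hundred-odd differing pairs would quickly show which files changed in more than their metadata.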

    Read the article

  • FTP Server on CentOS 5.8 - Transfer fails randomly

    - by Diego
I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a failed transfer on a random file. It's never the same one that fails: sometimes it's a .gif, sometimes a .css, sometimes a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

      COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
                [27/11/2012 11:43:53] 500 Invalid command: try being more creative
      ERROR:>   [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

      ServerName "ProFTPD"
      #ServerType standalone
      ServerType inetd
      DefaultServer on
      <Global>
        DefaultRoot ~ psacln
        AllowOverwrite on
      </Global>
      DefaultTransferMode binary
      UseFtpUsers on
      TimesGMT off
      SetEnv TZ :/etc/localtime
      Port 21
      Umask 022
      MaxInstances 30
      ScoreboardFile /var/run/proftpd/scoreboard
      TransferLog /usr/local/psa/var/log/xferlog
      # Change default group for new files and directories in vhosts dir to psacln
      <Directory /var/www/vhosts>
        GroupOwner psacln
      </Directory>
      # Enable PAM authentication
      AuthPAM on
      AuthPAMConfig proftpd
      IdentLookups off
      UseReverseDNS off
      AuthGroupFile /etc/group
      Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is the way it is; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help.

    Update: the issue occurs with both CuteFTP and FileZilla.

    Update: I replaced ProFTPD with PureFTPd and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
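
    A hedged next step, assuming the symptom (randomly garbled commands that survive a daemon swap) points at an FTP-unaware firewall or NAT helper mangling the control connection rather than at the server itself: pin the passive data ports, open them, and retest; also compare plain FTP against FTPS, since corruption that disappears over an encrypted control channel implicates a middlebox inspecting the traffic. The directives below are standard ProFTPD; the port range is arbitrary:

      # /etc/proftpd.conf: fix the passive data-port range
      PassivePorts 49152 49500

      # open the range (CentOS 5 iptables, illustrative)
      iptables -A INPUT -p tcp --dport 49152:49500 -j ACCEPT
      service iptables save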

    Read the article

  • Apache2.2 not responding on Windows 7 desktop

    - by Adam
Afternoon! I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running no problem, but all of a sudden requests have just stopped responding. They don't ever time out; the browser just keeps waiting for a response, which makes me think something is blocking communication with Apache. Interestingly, though, if I stop Apache the requests fail immediately. The Apache service is running, and using netstat I can see it listening on port 80 as configured:

      TCP    127.0.0.1:80    0.0.0.0:0    LISTENING

    If I stop the Apache service, that line disappears. I have an entry within my hosts file for each VHost I'm trying, all pointing to 127.0.0.1, and each VHost is configured to *:80. Nothing, however, is getting recorded in the access or error (at debug level) log files. I've verified the file paths are correct, even though they were never changed. Neither is anything getting recorded within Windows' Event Log. The problem showed up when I added a new VHost and restarted; however, I hadn't been using it for a couple of days prior, so I don't believe it's the config change. I have performed a syntax check to be sure, and when starting from the command prompt no errors are reported there. I do have Windows Firewall running, but I've verified the Apache rule is correct and tried turning it off to ensure that wasn't the problem. I've reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've also tried using a different port. I'm completely lost for ideas now. Can anybody help? Cheers, Adam
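
    A hedged check, assuming the hang is in the accept path rather than the network (a known trouble spot for Apache 2.2 on Windows): disable the AcceptEx() winsock extension, which has a history of silently swallowing connections under some antivirus/LSP layers, and confirm nothing else has claimed the port through http.sys:

      # httpd.conf: Apache 2.2 on Windows only (this directive was removed in 2.4)
      Win32DisableAcceptEx
      EnableSendfile Off
      EnableMMAP Off

      :: from an elevated command prompt: http.sys reservations and port 80 owners
      netsh http show urlacl
      netstat -abno | findstr :80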

    Read the article

  • Why is SMF manifest losing configuration data when exported on SmartOS?

    - by Scott Lowe
I'm running a server process under SMF (Service Management Facility) on Joyent's Base64 1.8.1 SmartOS image. For those not acquainted with SmartOS, it is a cloud-based distribution of IllumOS with KVM. Essentially it is like Solaris and inherits from OpenSolaris, so even if you've not used SmartOS, I'm hoping to tap into some Solaris knowledge on ServerFault.

    My issue is that I want an unprivileged user to be allowed to restart a service that they own. I have worked out how to do that by using RBAC, adding an authorization to /etc/security/auth_attr and associating that authorization with my user. I then added the following to my SMF manifest for the service:

      <property_group name='general' type='framework'>
        <!-- Allow to be restarted -->
        <propval name='action_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
        <!-- Allow to be started and stopped -->
        <propval name='value_authorization' type='astring'
                 value='solaris.smf.manage.my-server-process' />
      </property_group>

    And this works well when imported: my unprivileged user is allowed to restart, start and stop its own server process (this is for automated code deployments). However, if I export the SMF manifest, this configuration data is gone; all I see in that section is this:

      <property_group name='general' type='framework'>
        <property name='action_authorization' type='astring'/>
        <property name='value_authorization' type='astring'/>
      </property_group>

    Does anybody know why this is happening? Is my syntax wrong, or am I simply using SMF incorrectly?
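
    A hedged way to confirm where the values actually live, assuming the service is named my-server-process (substitute your real FMRI): svcprop shows what is in effect for the instance, and svccfg can set the properties straight into the repository if the manifest route keeps dropping them:

      # what is actually in effect for the service
      svcprop -p general my-server-process
      # set the authorizations directly in the repository
      svccfg -s my-server-process setprop general/action_authorization = astring: solaris.smf.manage.my-server-process
      svccfg -s my-server-process setprop general/value_authorization = astring: solaris.smf.manage.my-server-process
      svcadm refresh my-server-process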

    Read the article

  • Debian: Should I add vlan interface into bridge for KVM?

    - by javano
I am setting up a Debian Squeeze box as a KVM host. I want to add multiple interfaces to each KVM guest, so I want them to be on different VLANs. After reading about this, I believe the best method is to add multiple logical VLAN (sub-)interfaces to the physical NICs, then create a bridge adapter for each VLAN interface and assign each bridge as a NIC for KVM guests. Does this make good sense, or madness? Do I have to use bridged interfaces with KVM like this? Can't I just add eth1.xx and eth1.yy to my interfaces config below and then configure those directly as bridged KVM guest NICs? If so, how should this look in the interfaces config file below?

      user@host:~$ cat /etc/network/interfaces
      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # Management interface
      auto eth0
      iface eth0 inet static
          address 172.22.0.31
          netmask 255.255.255.0
          gateway 172.22.0.1

      # Interface for guest VMs
      auto eth1

      # Guest1 : Use VLAN 117
      auto eth1.117
      iface eth1.117 inet manual

      # Set up br1 for guest 1, bridging with vlan 117
      auto br1.117
      iface br1.117 inet manual
          bridge_ports eth1.117
          bridge_stp off

      user@host:~$ uname -a
      Linux hostname 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    UPDATE: I would really appreciate it if someone could clarify the config for me, as I have also seen the above written with this syntax, and I don't see why one would be preferred over the other:

      # Interface for guest VMs
      auto eth1
      allow-hotplug eth1
      iface eth1 inet static

      # Vlan 117 for guest 1
      auto vlan117
      iface vlan117 inet static
          vlan_raw_device eth1

      # Guest 1 : NIC 1
      auto br1.117
      iface br1.117 inet manual
          bridge_ports vlan117
          bridge_stp off
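
    A hedged sketch of the pattern commonly used for KVM guests on Debian, assuming the vlan and bridge-utils packages are installed: guests attach to a bridge (in libvirt terms, <interface type='bridge'> with <source bridge='br117'/>), so one bridge per VLAN is the usual arrangement, because a bare eth1.117 cannot be shared between several guests the way a bridge can. With the vlan package present, Debian's bridge scripts will typically create eth1.117 on demand when it is named in bridge_ports:

      # /etc/network/interfaces fragment: one bridge per guest VLAN
      auto br117
      iface br117 inet manual
          bridge_ports eth1.117
          bridge_stp off
          bridge_fd 0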

    Read the article

  • How do I create a "here document" within a shell function?

    - by BenU
I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM to figure out, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here files" within a function. I get a syntax error: unexpected end of file error when I include the following function:

      report_uptime () {
          cat <<- _EOF_
              <H2>System Uptime</H2>
              <PRE>$(uptime)</PRE>
          _EOF_
          return
      }

    The error goes away if I use the following function placeholder:

      report_uptime () {
          return
      }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ format to create a "here file" with no trouble:

      cat << _EOF_
      <HTML>
          <HEAD>
              <TITLE>$TITLE</TITLE>
          </HEAD>
          <BODY>
              <H1>$TITLE</H1>
              <P>$TIME_STAMP</P>
              $(report_uptime)
              $(report_disk_space)
              $(report_home_space)
          </BODY>
      </HTML>
      _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
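
    A hedged note on the usual cause here: with <<- the shell strips leading tab characters only, so if the function body is indented with spaces the closing _EOF_ is never recognized and bash reports "unexpected end of file" when the script runs out. A sketch that works when the indentation really is tabs (shown with → standing in for a literal tab):

      report_uptime () {
      →cat <<- _EOF_
      →→<H2>System Uptime</H2>
      →→<PRE>$(uptime)</PRE>
      →_EOF_
      →return
      }

    The top-level heredoc works because its terminator starts in column one, where no stripping is needed.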

    Read the article

  • check_snmp warning & critical thresholds with negative values

    - by Oesor
I'm querying some signal-level values measured in dBm, and the SNMP host on the remote device reports them as negative values, e.g. -90 dBm. However, check_snmp seems incapable of dealing with negative numbers as part of its threshold values. If I specify the values as part of a collection of OIDs, it accepts the syntax but converts the SNMP value to positive, thus always generating a WARNING/CRITICAL result:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 \
          -o DEVICE-MIB::AverageReceiveSNR.0,DEVICE-MIB::CurrentNoiseFloor.0 \
          -w 10:,~:-85 -c 15:,~:-80 -vvvv
      /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::AverageReceiveSNR.0 DEVICE-MIB::CurrentNoiseFloor.0
      DEVICE-MIB::AverageReceiveSNR.0 = INTEGER: 25
      DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
      Processing line 1
        oidname: DEVICE-MIB::AverageReceiveSNR.0
        response: = INTEGER: 25
      Processing line 2
        oidname: DEVICE-MIB::CurrentNoiseFloor.0
        response: = INTEGER: -97
      SNMP CRITICAL - 25 *97* | DEVICE-MIB::AverageReceiveSNR.0=25 DEVICE-MIB::CurrentNoiseFloor.0=97

    If I run it with a single OID, it gives me an error that the format is incorrect:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 \
          -o DEVICE-MIB::CurrentNoiseFloor.0 -w ~:-85 -c ~:-80 -vvvv
      Range format incorrect

    And if I run it with no thresholds defined, it works properly and returns the right value. This makes the graphs correct; however, it will never generate a notification when out of range:

      root@ops-00:/usr/local/nagios/libexec# ./check_snmp -H 192.168.1.100 \
          -o DEVICE-MIB::CurrentNoiseFloor.0 -vvvv
      /usr/bin/snmpget -t 1 -r 5 -m ALL -v 1 [authpriv] 192.168.1.100:161 DEVICE-MIB::CurrentNoiseFloor.0
      DEVICE-MIB::CurrentNoiseFloor.0 = INTEGER: -97
      Processing line 1
        oidname: DEVICE-MIB::CurrentNoiseFloor.0
        response: = INTEGER: -97
      SNMP OK - -97 | DEVICE-MIB::CurrentNoiseFloor.0=-97

    What am I doing wrong here? How would I, for example, generate a CRITICAL when the noise floor is -80 dBm or higher, a WARNING when it's between -85 and -80 dBm, and an OK at -85 dBm or lower? Do I have to write my own SNMP plugins when dealing with negative values?
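
    A hedged workaround, assuming a thin wrapper plugin is acceptable: fetch the value with snmpget and apply the thresholds in the script, emitting standard Nagios exit codes, which sidesteps check_snmp's range parsing entirely (host, community and OID below are placeholders):

      #!/bin/sh
      # check_noise_floor: CRITICAL >= -80 dBm, WARNING >= -85 dBm, else OK
      HOST="$1"; COMMUNITY="$2"
      OID="DEVICE-MIB::CurrentNoiseFloor.0"
      VAL=$(snmpget -v1 -c "$COMMUNITY" -Ovq "$HOST" "$OID") || { echo "UNKNOWN: snmpget failed"; exit 3; }
      if   [ "$VAL" -ge -80 ]; then echo "CRITICAL: noise floor ${VAL} dBm | noise=${VAL}"; exit 2
      elif [ "$VAL" -ge -85 ]; then echo "WARNING: noise floor ${VAL} dBm | noise=${VAL}";  exit 1
      else echo "OK: noise floor ${VAL} dBm | noise=${VAL}"; exit 0
      fi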

    Read the article

  • Azure's Ubuntu 12.04 fails to install PHP5

    - by Alex Kennberg
Similar to this article from Azure themselves: http://www.windowsazure.com/en-us/manage/linux/common-tasks/install-lamp-stack/ I am trying to install PHP5 on an Ubuntu 12.04 virtual machine. However, it fails installing ssl-cert:

      $ sudo apt-get install php5
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      php5 is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 49 not upgraded.
      1 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]? y
      Setting up ssl-cert (1.0.28) ...
      Could not create certificate. Openssl output was:
      Generating a 2048 bit RSA private key
      ............................+++
      ...........................................................+++
      writing new private key to '/etc/ssl/private/ssl-cert-snakeoil.key'
      -----
      problems making Certificate Request
      140320238503584:error:0D07A097:asn1 encoding routines:ASN1_mbstring_ncopy:string too long:a_mbstr.c:154:maxsize=64
      dpkg: error processing ssl-cert (--configure):
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       ssl-cert
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any tips appreciated.
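
    A hedged reading of the error: maxsize=64 is OpenSSL's limit on the certificate CN, and ssl-cert uses the machine's FQDN there, so an Azure-generated hostname longer than 64 characters makes the snakeoil certificate generation fail. A sketch of the usual fix, using a shorter hostname (shortname is a placeholder):

      # give the box a CN-sized name, then regenerate the snakeoil cert
      sudo hostname shortname
      echo shortname | sudo tee /etc/hostname
      sudo make-ssl-cert generate-default-snakeoil --force-overwrite
      # let dpkg finish what apt started
      sudo apt-get install -f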

    Read the article

  • Need Corrected htaccess File

    - by Vince Kronlein
I'm attempting to use a WordPress plugin called WP Fast Cache, which creates static HTML files from all your posts, pages and categories. It creates the following directory structure inside wp-content:

      wp_fast_cache/
        example.com/
          pagename/
            index.html
          categoryname/
            postname/
              index.html

    Basically just a nested directory structure with a final index.html for each item. But the .htaccess edits it makes are crazy:

      #start_wp_fast_cache - do not remove this comment
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{REQUEST_METHOD} ^(GET)
      RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html -f
      RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
      RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
      RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}x__query__x%{QUERY_STRING}index.html [L]
      RewriteCond %{REQUEST_METHOD} ^(GET)
      RewriteCond %{QUERY_STRING} ^$
      RewriteCond /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
      RewriteCond %{HTTP_USER_AGENT} !(iPhone|Windows\sCE|BlackBerry|NetFront|Opera\sMini|Palm\sOS|Blazer|Elaine|^WAP.*$|Plucker|AvantGo|Nokia)
      RewriteCond %{HTTP_COOKIE} !(wordpress_logged_in) [NC]
      RewriteRule ^(.*)$ /home/user/public_html/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
      </IfModule>
      #end_wp_fast_cache

    No matter how I try to work this out, I get a 404 Not Found, and not the WordPress 404 but a janky Apache 404. I need to find the correct syntax to route all requests for files or directories that don't exist to:

      wp-content/wp_fast_cache/hostname/request_uri/

    So for example:

      Page:     example.com/about-us/                    => wp-content/wp_fast_cache/example.com/about-us/index.html
      Post:     example.com/my-category/my-awesome-post/ => wp-content/wp_fast_cache/example.com/my-category/my-awesome-post/index.html
      Category: example.com/news/                        => wp-content/wp_fast_cache/example.com/news/index.html

    Any help is appreciated.
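
    A hedged sketch of the likely fix: in .htaccess (per-directory) context a RewriteRule substitution is treated as a URL path, so rewriting to an absolute filesystem path under /home/user/... produces exactly this kind of Apache-level 404. Pointing the substitution at the document-root-relative path should behave; the snippet below assumes wp-content sits directly under the document root, and adds a guard against rewrite loops:

      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteCond %{REQUEST_METHOD} ^GET$
      RewriteCond %{QUERY_STRING} ^$
      RewriteCond %{HTTP_COOKIE} !wordpress_logged_in [NC]
      RewriteCond %{REQUEST_URI} !wp_fast_cache
      RewriteCond %{DOCUMENT_ROOT}/wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html -f
      RewriteRule .* /wp-content/wp_fast_cache/%{HTTP_HOST}%{REQUEST_URI}index.html [L]
      </IfModule>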

    Read the article

  • A star vs internet routing pathfinding

    - by alan2here
In many respects, pathfinding algorithms like A* for finding the shortest route through graphs are similar to the pathfinding the internet performs when routing traffic. However, the pathfinding routers perform seems to have remarkable properties. As I understand it:

    - It's very performant.
    - New nodes can be added at any time, using a free address from a finite (not tree-like) address space.
    - It's real routing: like A*, there's never any doubling back, for example.
    - IP addresses don't have to be geographically nearby.
    - The network reacts quickly to changes in the network's shape, for example when a line is down.
    - Routers share information, and it takes time for new IPs to be registered everywhere, but presumably each router doesn't have to store a list of all the addresses that each of its directions leads to most directly.

    I can't find this information elsewhere; however, I don't know where to look or what search terms to use. I'm looking for a basic, general, high-level description of the algorithm's workings, from the point of view of an individual router.

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries an apt-get update, but quite often I get this:

      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

      apt-get update
      apt-get dist-upgrade --force-yes --yes

    And here is the complete output of apt-get:

      Ign https://myserver maverick Release.gpg
      Ign https://myserver/ubuntu/ maverick/main Translation-en
      Ign https://myserver maverick Release
      Ign https://myserver maverick/main i386 Packages/DiffIndex
      Ign https://myserver maverick/main i386 Packages
      Ign https://myserver maverick/main i386 Packages
      Hit https://myserver maverick/main i386 Packages
      Reading package lists...
      Building dependency tree...
      Reading state information...
      The following packages will be upgraded:
        majdb utilitaires voosicomat
      3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      Need to get 6207kB/6273kB of archives.
      After this operation, 0B of additional disk space will be used.
      WARNING: The following packages cannot be authenticated!
        utilitaires voosicomat majdb
      Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
      Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
      Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
      Fetched 7091kB in 21s (324kB/s)
      E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards
    Cédric
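
    A hedged pair of checks, assuming the mismatch is between the Packages index and the .deb actually served, i.e. either a stale partial download on the netbook or an index that was not regenerated after the package changed:

      # client side: discard cached and partial downloads, then retry
      sudo apt-get clean
      sudo rm -f /var/lib/apt/lists/partial/*
      sudo apt-get update

      # server side: compare the advertised size with the file being served
      awk '/^Package: voosicomat/,/^$/' dists/maverick/main/binary-i386/Packages | grep -E '^(Size|SHA1)'
      ls -l dists/maverick/main/binary-i386/voosicomat*.deb

    If the sizes disagree, regenerating the index (apt-ftparchive or dpkg-scanpackages) after every upload should stop the intermittent failures.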

    Read the article

  • installing Conkeror on Ubuntu 12.04

    - by Menelaos Perdikeas
I am reading the instructions on the Conkeror site (and elsewhere) on how to install Conkeror on Ubuntu (I am using Ubuntu 12.04 LTS), and it seems that the correct sequence is:

      sudo apt-add-repository ppa:xtaran/conkeror
      sudo apt-get update
      sudo apt-get install conkeror conkeror-spawn-process-helper

    The first step (apt-add-repository) seems to execute without a problem, giving the following output:

      You are about to add the following PPA to your system:
       Conkeror Debian packages for Ubuntu releases without xulrunner (i.e. for 11.04 Natty and later)
       More info: https://launchpad.net/~xtaran/+archive/conkeror
      Press [ENTER] to continue or ctrl-c to cancel adding it
      Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.Re7pWaDEQF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv CB29CBE050EB1F371BAB6FE83BE0F86A6D689050
      gpg: requesting key 6D689050 from hkp server keyserver.ubuntu.com
      gpg: key 6D689050: "Launchpad PPA for Axel Beckert" not changed
      gpg: Total number processed: 1
      gpg:              unchanged: 1

    However, apt-get update seems unable to fetch packages from the newly added PPA, with its output ending in:

      Hit http://security.ubuntu.com precise-security/restricted Translation-en
      Hit http://security.ubuntu.com precise-security/universe Translation-en
      Err http://ppa.launchpad.net precise/main Sources
        404  Not Found
      Ign http://extras.ubuntu.com precise/main Translation-en_US
      Err http://ppa.launchpad.net precise/main i386 Packages
        404  Not Found
      Ign http://extras.ubuntu.com precise/main Translation-en
      Ign http://ppa.launchpad.net precise/main Translation-en_US
      Ign http://ppa.launchpad.net precise/main Translation-en
      W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/source/Sources  404  Not Found
      W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/binary-i386/Packages  404  Not Found
      E: Some index files failed to download. They have been ignored, or old ones used instead.

    Accordingly, apt-get install conkeror fails with:

      mperdikeas@mperdikeas:~$ sudo apt-get install conkeror
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      E: Unable to locate package conkeror

    Any ideas what might be wrong?
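
    A hedged interpretation of the 404s: the PPA simply publishes no packages for the precise series, so apt finds nothing under dists/precise. Two things to try, with the target series below purely illustrative; check which series the PPA really publishes first, and note the sources file name may differ slightly:

      # list the series the PPA actually publishes
      curl -s http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/ | grep -o 'href="[^/"]*/"'
      # if precise is absent, point the entry at a published series
      sudo sed -i 's/precise/oneiric/g' /etc/apt/sources.list.d/xtaran-conkeror-precise.list
      sudo apt-get update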

    Read the article

  • Dependencies problems installing openjdk on Ubuntu

    - by Rodnower
I'm trying to install openjdk-7-jre:

      sudo apt-get install openjdk-7-jre

    But I have dependency problems:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or if you are using the unstable
      distribution that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:

      The following packages have unmet dependencies:
       openjdk-7-jre : Depends: openjdk-7-jre-headless (= 7u7-2.3.2a-0ubuntu0.12.04.1) but it is not going to be installed
                       Depends: libgif4 (>= 4.1.4) but it is not installable
                       Depends: libatk-wrapper-java-jni (>= 0.30.4-0ubuntu2) but it is not installable
                       Recommends: libgnome2-0 but it is not installable
                       Recommends: libgnomevfs2-0 but it is not going to be installed
                       Recommends: ttf-dejavu-extra but it is not installable
      E: Unable to correct problems, you have held broken packages.

    This is my Ubuntu version: Ubuntu 12.04.1 LTS. I have no idea how to resolve the dependencies; can someone help me? Thank you in advance.
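
    A hedged check, assuming "not installable" means whole archive components or repositories are missing from the sources rather than a version conflict: those libraries come from the standard precise repositories, so a trimmed-down sources.list produces exactly this pattern, and "held broken packages" can also come from manual holds:

      # what repositories and components are enabled for precise?
      grep -hE '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
      # enable universe if it's missing (12.04-era syntax), then refresh
      sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu precise universe"
      sudo apt-get update
      # any packages pinned on hold?
      dpkg --get-selections | grep hold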

    Read the article

  • Exchange MSExchangeIS Mailbox Store Error

    - by Bart Silverstrim
Boss asked me to check whether I could figure out why he's had to restart the services on the Exchange server three mornings in a row now. While going through the system logs I ran across an error from the MSExchangeIS Mailbox Store, category General, Event 9690. The message said (edited to generalize):

      Exchange store 'First Storage Group\Mailbox Store (Servername)': The logical size of this
      database (the logical size equals the physical size of the .edb file and the .stm file minus
      the logical free space in each) is 22 GB. This database size has exceeded the size limit of
      22 GB. This database will be dismounted immediately.

    Hmm... it happened at five in the morning, and I'm thinking this is a pretty good hint that it leads to the culprit. Thing is, I'm not an Exchange expert, so I'm still googling around to figure out how to fix the problem. Any better guidance out there? Or am I barking up the wrong binary tree? Exchange System Manager reports that the server is "version 6.5 build 7638.2, SP2", Standard edition, which I believe is Exchange 2003. It's running on Windows Server 2003 R2 Standard, SP2.
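
    A hedged pointer, assuming this matches the documented Exchange 2003 SP2 behavior: Standard edition enforces a configurable database size cap (raisable to 75 GB), checked nightly at around 5:00 a.m.; when the store exceeds the cap it is dismounted, which would explain the pattern of morning restarts. The cap lives in a registry value under the mailbox store's key. A sketch of raising it (SERVERNAME and Private-GUID are placeholders, and the exact value name should be verified against Microsoft's guidance for Event 9690 before changing anything):

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\SERVERNAME\Private-GUID" /v "Database Size Limit in GB" /t REG_DWORD /d 40

    After raising the limit and remounting the store, pruning mailboxes or moving to Enterprise edition would be the longer-term fix.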

    Read the article

  • Can't get rsync over sftp to work

    - by Patrik
I'm trying to set up a backup system from an Ubuntu server to a Synology NAS (DS413j) using rsync and SSH. I have created a user for this that we can call ubuntu-backup, with a directory in its home directory called www where the backup will be saved. I have enabled Network Backup in DSM, and the user ubuntu-backup has full access to its home directory. Here is my rsync config file on the Synology NAS:

      #motd file = /etc/rsyncd.motd
      #log file = /var/log/rsyncd.log
      pid file = /var/run/rsyncd.pid
      lock file = /var/run/rsync.lock
      use chroot = no

      [NetBackup]
      path = /var/services/NetBackup
      comment = Network Backup Share
      uid = root
      gid = root
      read only = no
      list = yes
      charset = utf-8
      auth users = root
      secrets file = /etc/rsyncd.secrets

      [ubuntu-backup]
      path = /volume1/homes/ubuntu-backup/www
      comment = Ubuntu Backup
      uid = ubuntu-backup
      gid = users
      read only = false
      auth users = ubuntu-backup
      secrets file = /etc/rsyncd.secrets

    The permissions on /volume1/homes/ubuntu-backup/www are ubuntu-backup:users 777. Here is the command I'm running:

      rsync -aHvhiPb /var/www/ [email protected]:./

    The result:

      sending incremental file list
      ERROR: module is read only
      rsync error: syntax or usage error (code 1) at main.c(1034) [Receiver=3.0.9]
      rsync: connection unexpectedly closed (9 bytes received so far) [sender]
      rsync error: error in rsync protocol data stream (code 12) at io.c(605) [sender=3.0.9]

    If I run this instead:

      rsync -aHvhiPb /var/www/ [email protected]

    it looks like it's sending files, with no errors, but I can't find anything on the NAS.
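
    A hedged explanation of the two behaviors, assuming the NAS maps incoming rsync sessions onto its daemon config: the single-colon form addresses a path over SSH, while the double-colon (or rsync://) form addresses a module from rsyncd.conf, and "module is read only" suggests the request landed on a module other than [ubuntu-backup]; the error-free run likely wrote somewhere other than the www directory. A sketch of addressing the module explicitly (NAS_ADDRESS stands in for the NAS host/IP, which is elided in the post):

      # talk to the [ubuntu-backup] module by name, daemon syntax
      rsync -aHvhiPb /var/www/ rsync://ubuntu-backup@NAS_ADDRESS/ubuntu-backup/
      # or the equivalent double-colon form
      rsync -aHvhiPb /var/www/ ubuntu-backup@NAS_ADDRESS::ubuntu-backup/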

    Read the article

  • Do you know of reputable backup software that can capture ONLY file system structure + attributes, WITHOUT file content?

    - by bogdan
Is there, on Windows, reputable backup software capable of capturing ONLY a file system's directory and file structure, along with each item's attributes, WITHOUT capturing the actual file content (all files should be zero-length in the backup)? I thoroughly searched the web for a solution and wasn't able to find one.

    A scenario where this would be very useful: I have a large drive with a huge number of files. If the drive dies, I don't care much about the content of these files (I can always download it again from the internet at any time), but I do care HUGELY about the names of the files that were on it, possibly also about their MD5 hashes and other classic file attributes (especially created and modified dates).

    The functionality I need is present to an extent in media/file cataloging software (e.g. WhereIsIt) and, to a lesser extent, in a couple of Total Commander extensions (DiskDir, DiskDirExtended). The huge drawback of cataloging software is that it isn't designed to store previous versions of each item (AFAIK) and, most importantly, it has very weak content backup capabilities.

    I managed to think of a hack, but I hope there's backup software out there that already has this capability and I just failed to find it, hence this question. The hack: RoboCopy could be used with /CREATE (create directory tree and zero-length files only) or /COPY without the D=Data flag, to clone a directory structure into one where all files are zero-length but have the desired attributes. Then I would back up the cloned directory structure with a reputable backup tool. I would really love to avoid a hack like this one, if possible.

    Thanks,
    Bogdan
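
    A hedged sketch of the RoboCopy hack described above, for reference (the flags are standard robocopy; source and destination are illustrative):

      :: clone the directory tree with zero-length files, keeping timestamps
      robocopy C:\BigDrive D:\Skeleton /E /CREATE /DCOPY:T /R:1 /W:1

    /CREATE writes the directory tree with zero-length files, and /DCOPY:T preserves directory timestamps; the resulting skeleton is tiny and can then be versioned by any ordinary backup tool.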

    Read the article

  • Mindtouch broke my Apache2 virtual host configuration.

    - by grenade
I installed MindTouch using the instructions here, and it seems to have broken my virtual host configuration. I have several domains running off the same Apache instance, and this was working fine, but now all my domain names resolve to the virtual host where MindTouch was installed. So MindTouch made all my domain names point to the new MindTouch instance. Grrr! I use Debian's default virtual host mechanisms (sites-enabled, etc.). Does anyone know which Apache directive MindTouch used to ruin my virtual host setup? I've scoured all the conf files and there is nothing obvious in apache2.conf or httpd.conf that would cause this behaviour. Did it create a symlink somewhere that I should destroy? I should add that I've already uninstalled the MindTouch packages, but Apache persists in redirecting all domains to the first one mentioned in the sites-enabled folder.

      thini:~# apache2ctl -S
      [Wed Jan 05 13:39:11 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
      VirtualHost configuration:
      wildcard NameVirtualHosts and _default_ servers:
      *:* www.openancestry.org (/etc/apache2/sites-enabled/openancestry.org:1)
      *:* www.pragmantra.com (/etc/apache2/sites-enabled/pragmantra.com:1)
      *:* services.pragmantra.com (/etc/apache2/sites-enabled/services.pragmantra.com:1)
      *:* www.subversionreports.com (/etc/apache2/sites-enabled/subversionreports.com:1)
      *:* www.thijssen.ch (/etc/apache2/sites-enabled/thijssen.ch:1)
      Syntax OK
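
    A hedged reading of that apache2ctl -S output: the warning "NameVirtualHost *:80 has no VirtualHosts" alongside vhosts listed as *:* suggests the <VirtualHost> blocks now open with a bare <VirtualHost *> while name-based matching is declared for *:80; when the address specs don't match, Apache serves the first vhost for every request, which is exactly the observed behavior. A sketch of the usual Debian-style alignment (ports.conf is where Debian keeps NameVirtualHost):

      # /etc/apache2/ports.conf
      NameVirtualHost *:80
      Listen 80

      # each file in sites-enabled/ should open with a matching spec
      <VirtualHost *:80>
          ServerName www.openancestry.org
          ...
      </VirtualHost>

    After aligning the specs, apache2ctl -S should list the vhosts under *:80 and the warning should disappear.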

    Read the article

  • Speakers silent, headphones work in Ubuntu 9.04

    - by CarlF
I'm running Ubuntu 9.04. It worked fine for months, then I rebooted yesterday after weeks of continuous operation. Now audio won't play through the speakers. The USB headset works fine, but the Conexant audio (CX20549) does not. Weirdly, the system thinks it's playing: pavumeter shows appropriate levels and the volume looks OK in alsamixer, but there is no sound.

    I did find this page: http://www.eugeneteplitsky.com/fixing-silent-pulseaudio-in-ubuntu-9-04/ Unfortunately the advice there doesn't help me. For one thing, the syntax of the alsa-base.conf file is apparently not actually documented anywhere. For another, my chipset isn't listed in the kernel.org docs he links to!

    EDIT: Would upgrading to 9.10 help? Is there a major change in the audio subsystem between 9.04 and 9.10? Any suggestions?

    EDIT 2: This is stranger than I thought. Sound works normally in Xine, but is silent in Audacity, VLC and mplayer. What the?
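
    A hedged diagnostic based on the EDIT 2 detail, assuming Xine is talking straight to ALSA while the silent apps go through PulseAudio: if so, PulseAudio is routing to the wrong sink (e.g. the USB headset) after the reboot changed device ordering. Checking and repointing the default sink from a terminal (the sink name below is illustrative; copy the real one from the list):

      # see which sinks PulseAudio knows about and which is the default
      pacmd list-sinks | grep -E 'index|name:'
      # make the Conexant card the default sink
      pacmd set-default-sink alsa_output.pci-0000_00_14.2.analog-stereo
      # test tone through ALSA directly, to confirm the hardware path
      speaker-test -D plughw:0 -c 2 -t wav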

    Read the article
