Search Results

Search found 26263 results on 1051 pages for 'linux guest'.

  • Recompiling yum installation

    - by Saif Bechan
    I have installed Nginx using yum. Now, to add modules to the existing installation, I have to recompile Nginx from source. How can I recompile a yum installation? There is no source. Should I uninstall the yum package, then download the source package, recompile it with the module, and then install and reconfigure everything again?
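
    A minimal sketch of one common approach (the tarball version and module path below are illustrative, not taken from the question): record the packaged build flags, remove the packaged Nginx, and rebuild from source with the extra module compiled in.

        nginx -V                                     # note the configure arguments the package was built with
        sudo yum remove nginx
        wget http://nginx.org/download/nginx-0.8.54.tar.gz
        tar xzf nginx-0.8.54.tar.gz && cd nginx-0.8.54
        ./configure --add-module=/path/to/module     # plus the original flags reported by nginx -V
        make && sudo make install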

  • monit syntax error : "if 5 restarts within 5 cycles then alert"

    - by omry
    I am trying to get an alert from monit when it fails to restart a service 5 times, but I get a syntax error:

        /etc/monit/monit.d/engine.conf:5: Error: syntax error 'alert'

    This is the engine.conf file:

        check process engine with pidfile /var/run/engine.pid
          group engine
          start program = "/etc/init.d/engine start"
          stop program = "/etc/init.d/engine stop"
          if 5 restarts within 5 cycles then alert

    Any idea what's wrong with it?
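
    One thing worth checking (an assumption, not a confirmed diagnosis): older monit releases do not accept alert as the action of a restarts test, only actions such as timeout or exec. A hedged sketch for those versions:

        # timeout stops monitoring the service once the threshold is reached
        if 5 restarts within 5 cycles then timeout
        # or run an arbitrary command instead (the path is illustrative)
        if 5 restarts within 5 cycles then exec "/usr/bin/logger engine keeps restarting"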

  • Out of disk space on 4GB partition yet it's only using 2GB

    - by Camsoft
    I'm running Ubuntu and have had a problem where the root partition has run out of disk space. When I run df -h I get the following:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda6       4.6G  4.5G     0 100% /

    Yet there are only 2GB of files actually using up this partition. I then ran df -i and got the following:

        Filesystem      Inodes  IUsed  IFree IUse% Mounted on
        /dev/sda6       305824 118885 186939   39% /

    I have no idea what the -i flag does, but it clearly shows that only 39% is used. Can anyone explain where my disk space has gone?
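
    A couple of diagnostics commonly used for this kind of gap (a sketch; the flags assume GNU coreutils, lsof and tune2fs being available):

        sudo du -xh --max-depth=1 / | sort -h          # space per top-level directory, this filesystem only
        sudo lsof +L1                                  # files deleted while still open, which df counts but du cannot see
        sudo tune2fs -l /dev/sda6 | grep -i reserved   # blocks reserved for root (typically 5%)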

  • Connect two subnets without router

    - by Shcheklein
    I have two Comcast routers, with a different subnet on each. Each subnet contains 5 static IPs. Two questions: Are there any problems if both routers and the machines from both subnets are connected to one switch? Security issues don't matter here; I need to know if there are performance or other problems. Is it possible to make machines from different subnets see each other if they are all connected to one switch? Some static routing, added ARP records or something else... I just want to avoid configuring second Ethernet adapters, a third router or anything like that. And I need to connect these subnets via a high-speed local network.
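
    A minimal sketch of the static-routing idea for two subnets sharing one switch (the addresses and interface name are illustrative):

        # on a machine in subnet A, reach subnet B directly over the shared segment
        sudo ip route add 203.0.113.8/29 dev eth0
        # or give the interface a secondary address inside subnet B
        sudo ip addr add 203.0.113.10/29 dev eth0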

  • Running scripts from another directory

    - by Desmond Hume
    Quite often, the script I want to execute is not located in my current working directory and I don't really want to leave it. Is it good practice to run scripts (Bash, Perl, etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it . /path/to/script or sh /path/to/script, and how do I use sudo in such cases? This, for example, doesn't work: sudo . /path/to/script
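
    For reference, a short sketch of what each invocation form does (general shell behaviour, not specific to the scripts in question):

        /path/to/script             # runs in a new process; needs the execute bit and a shebang line
        sh /path/to/script          # runs under sh regardless of the execute bit
        . /path/to/script           # sourced into the current shell; '.' is a builtin, so sudo cannot run it
        sudo bash /path/to/script   # elevated run in a fresh shell, the usual workaround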

  • Grant a user access to directories shared by root (mode 770)

    - by Paul Dinham
    I want to grant a user (username: paul) access to all directories shared by the root user with mode 770. I do it this way:

        groups root     # prints the list of groups the root user is in
        usermod -a -G group1 paul
        usermod -a -G group2 paul
        usermod -a -G group3 paul
        ...

    All of 'group1', 'group2', 'group3' appear in the group list of the root user. However, after adding 'paul' to all the groups above, he still cannot write to directories shared by the root user with mode 770. Did I do something wrong?
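
    A couple of checks that often explain this (a sketch, not a confirmed diagnosis; the directory path is illustrative): supplementary group changes only take effect in a new login session, and the directory's group owner must be one of the groups the user was added to.

        id paul                  # are the new groups actually listed?
        ls -ld /shared/dir       # is the directory's group one of them, with g+rwx?
        su - paul                # a fresh login session picks up the new membership
        newgrp group1            # or switch the effective group in the current session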

  • Find out which task is generating a lot of context switches on Linux

    - by Gaks
    According to vmstat, my Linux server (2x Core2 Duo 2.5 GHz) is constantly doing around 20k context switches per second:

        # vmstat 3
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff   cache   si   so   bi   bo    in    cs us sy id wa
         2  0   7292 249472  82340 2291972    0    0    0    0     0     0  7 13 79  0
         0  0   7292 251808  82344 2291968    0    0    0  184    24 20090  1  1 99  0
         0  0   7292 251876  82344 2291968    0    0    0   83    17 20157  1  0 99  0
         0  0   7292 251876  82344 2291968    0    0    0   73    12 20116  1  0 99  0
        ...

    but uptime shows a small load:

        load average: 0.01, 0.02, 0.01

    and top doesn't show any process with high %CPU usage. How do I find out what exactly is generating those context switches? Which process/thread? I tried to analyze pidstat output:

        # pidstat -w 10 1
        12:39:13      PID   cswch/s nvcswch/s  Command
        12:39:23        1      0.20      0.00  init
        12:39:23        4      0.20      0.00  ksoftirqd/0
        12:39:23        7      1.60      0.00  events/0
        12:39:23        8      1.50      0.00  events/1
        12:39:23       89      0.50      0.00  kblockd/0
        12:39:23       90      0.30      0.00  kblockd/1
        12:39:23      995      0.40      0.00  kirqd
        12:39:23      997      0.60      0.00  kjournald
        12:39:23     1146      0.20      0.00  svscan
        12:39:23     2162      5.00      0.00  kjournald
        12:39:23     2526      0.20      2.00  postgres
        12:39:23     2530      1.00      0.30  postgres
        12:39:23     2534      5.00      3.20  postgres
        12:39:23     2536      1.40      1.70  postgres
        12:39:23    12061     10.59      0.90  postgres
        12:39:23    14442      1.50      2.20  postgres
        12:39:23    15416      0.20      0.00  monitor
        12:39:23    17289      0.10      0.00  syslogd
        12:39:23    21776      0.40      0.30  postgres
        12:39:23    23638      0.10      0.00  screen
        12:39:23    25153      1.00      0.00  sshd
        12:39:23    25185     86.61      0.00  daemon1
        12:39:23    25190     12.19     35.86  postgres
        12:39:23    25295      2.00      0.00  screen
        12:39:23    25743      9.99      0.00  daemon2
        12:39:23    25747      1.10      3.00  postgres
        12:39:23    26968      5.09      0.80  postgres
        12:39:23    26969      5.00      0.00  postgres
        12:39:23    26970      1.10      0.20  postgres
        12:39:23    26971     17.98      1.80  postgres
        12:39:23    27607      0.90      0.40  postgres
        12:39:23    29338      4.30      0.00  screen
        12:39:23    31247      4.10     23.58  postgres
        12:39:23    31249     82.92     34.77  postgres
        12:39:23    31484      0.20      0.00  pdflush
        12:39:23    32097      0.10      0.00  pidstat

    It looks like some PostgreSQL tasks are doing around 10 context switches per second, but it doesn't all sum up to 20k anyway. Any idea how to dig a little deeper for an answer?
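
    Two follow-up measurements that often close this kind of gap (a sketch; the -t flag depends on the installed sysstat version):

        pidstat -wt 10 1                      # include per-thread switches, which the per-process view can hide
        watch -d -n1 cat /proc/interrupts     # see whether interrupt activity tracks the switch rate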

  • SLES AutoYaST Script Validity Verification

    - by Xerxes
    Does anyone here write their own customized AutoYaST scripts for building SLES servers? I'm not talking about generating them with yast2 autoyast. If so, have you found a way to verify the syntax? xmllint is good as far as telling you that the XML syntax is valid, but without an up-to-date DTD it can't tell you anything more, and the shipped DTDs are out of date. I've opened a ticket with Novell on this, but who knows when and what I'll hear back.
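
    For reference, a hedged sketch of the xmllint checks involved (the schema path is an assumption, not a known-good SLES location):

        xmllint --noout autoinst.xml        # well-formedness only
        xmllint --noout --relaxng /usr/share/YaST2/schema/autoyast/rng/profile.rng autoinst.xml   # structural validation, if a current schema is available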

  • sendmail Name server timeout

    - by broody
    Complete sendmail newbie here... I've been trying to get mailing to work in PHP and I've traced it down to sendmail's complaint about a "Name server timeout":

        > sendmail -t -v
        > From: [email protected]
        > To: [email protected]
        > .
        gmail.com: Name server timeout
        [email protected]... Transient parse error -- message queued for future delivery
        [email protected]... queued

    So it sounds like a DNS issue? But I can do a "dig mx gmail.com" and it will query successfully. Here's what confuses me... I can get sendmail to work two other ways. The first way is through telnet:

        > telnet 127.0.0.1 25
        > Helo me
        > Mail from: [email protected]
        > Rcpt to: [email protected]
        > .
        message sent

    And the second way is by explicitly passing the sendmail.cf, but this is strange because it's the exact same file I use to configure sendmail to begin with:

        > sendmail -t -v -C/etc/mail/sendmail.cf

    But none of these solutions will resolve my PHP mailing... I am clueless as to what is going on... appreciate any help.
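
    A couple of checks worth running (a sketch, not a confirmed diagnosis): when invoked from the command line or from PHP, sendmail normally submits through submit.cf rather than sendmail.cf, and it resolves names with whatever /etc/resolv.conf provides.

        ls -l /etc/mail/submit.cf /etc/mail/sendmail.cf    # is the submission config present and sane?
        cat /etc/resolv.conf                               # the resolver sendmail actually uses
        dig mx gmail.com @$(awk '/^nameserver/{print $2; exit}' /etc/resolv.conf)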

  • Windows 7 doesn't boot when second hard disk is connected

    - by kenshin9786
    Apologies in advance for my bad English. I have two hard disks, one SATA and one IDE. I have Windows XP and 7 on the SATA one, and Ubuntu on the IDE. Both of them boot and work, the BIOS recognizes them, everything just works. After I installed Windows 7 and connected the IDE drive, it freezes on "Starting Windows" (the black screen with the Windows logo). If I unplug the IDE drive, it starts normally. Windows XP starts normally in both situations (with or without the IDE drive connected), and the same goes for Ubuntu (it works with both disks connected or with just the IDE one it lives on). The IDE drive is in good health according to SMART. The IDE drive is first in the boot order. It goes to Ubuntu's GRUB first, then by default to the Windows 7 bootloader, and then to XP. I think the problem is not the bootloader or GRUB. I've read that it can be solved by formatting the "problematic" hard disk, because Windows 7 cannot handle so many active partitions or something like that. But that's not an option for me: I don't want to lose my Ubuntu or make it unbootable. How can I solve this without those consequences? Any help would be appreciated.

  • Program complains not enough disk space even if the disk space exists

    - by user1189899
    I have an EXT3 partition mounted in ordered data mode. If a power failure occurs while a program is creating files on that partition, I see that the space usage reported is normal and I don't see any partially written files. But when I try to run the same program again after the system comes back up, it complains that there is not enough disk space, even though the free space reported is far more than required. The program always succeeds under normal conditions. The problem also seems to disappear when the partition is remounted. I was wondering what the right way to handle this situation would be, other than unmounting and remounting.
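
    A few read-only checks that can help narrow this down (a sketch; the mount point and device names are illustrative):

        stat -f /mnt/data                                   # free vs. available blocks as statfs() reports them to programs
        df -h /mnt/data && df -i /mnt/data                  # block exhaustion vs. inode exhaustion
        sudo dumpe2fs -h /dev/sdb1 | grep -i 'free blocks'  # what the on-disk superblock believes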

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server can no longer handle the db, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. Like I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows which are frequently updated and queried. The most frequent error the db shows is "Lock wait timeout exceeded". Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid those by increasing the InnoDB buffer pool size. Please suggest what configuration I should look for in the server I should buy. I read somewhere that the db should ideally have a buffer pool larger than the size of its data, so in my case I guess I'd need more than 9 GB of memory. What other things should I look for in the server? Just tell me if I should give you more info about the
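
    For context, a minimal sketch of the InnoDB settings usually sized against the working set on a dedicated database box (the values are illustrative, not recommendations for this workload):

        [mysqld]
        innodb_buffer_pool_size = 12G       # ideally large enough to hold the hot data
        innodb_log_file_size    = 512M      # larger logs smooth out write-heavy bursts
        innodb_flush_method     = O_DIRECT  # avoid double-buffering through the OS cache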

  • samba4 not building in Arch Linux

    - by kmplsv
        cp bin/tdbtool bin/tdbdump bin/tdbbackup /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/bin
        cp ./include/tdb.h /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/include
        cp tdb.pc /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/pkgconfig
        cp libtdb.a libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so
        rm -f /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        ln -s libtdb.so.1.2.4 /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/lib/libtdb.so.1
        mkdir -p /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        cp tdb.so /tmp/yaourt-tmp-root/aur-samba4/pkg/`/tmp/yaourt-tmp-root/aur-samba4/src/bin/python -c "import distutils.sysconfig; print distutils.sysconfig.get_python_lib(1, prefix='/opt/samba4/samba')"`
        /bin/install -c -d /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8
        for I in manpages/*.8; do \
            /bin/install -c -m 644 $I /tmp/yaourt-tmp-root/aur-samba4/pkg//opt/samba4/samba/share/man/man8; \
        done
        /bin/install: cannot stat `manpages/*.8': No such file or directory
        make: *** [installdocs] Error 1
        Aborting...
        ==> ERROR: Makepkg was unable to build samba4.
        ==> Restart building samba4 ? [y/N]
        ==> -------------------------------
        ==>

    Any ideas what is causing my build to fail? I'm assuming it's an issue with manpages, but I can't figure out exactly what package it is looking for that I don't have.

  • Nexus functionality is limited after installation

    - by Dmitriy Sukharev
    I have a CentOS based server with Sonatype Nexus 2.0.4-1 installed. The issue is that the standard "Artifact Search", "Advanced Search", "Browse Index" and "Refresh Index" Nexus features are missing, as is the Artifact Information tab after selecting any artifact (only the Maven Information tab is shown). I tried to Google it, but was amazed that there is no information about this issue. The actions I've done are, essentially:

        wget http://www.sonatype.org/downloads/nexus-2.0.4-1-bundle.tar.gz
        tar -xvf nexus-2.0.4-1-bundle.tar.gz
        cp -r nexus-2.0.4-1 sonatype-work /opt/
        ln -s /opt/nexus-2.0.4-1/* /opt/nexus
        ln /opt/nexus/bin/nexus /etc/init.d/
        chmod 755 /etc/init.d/nexus
        vim /etc/init.d/nexus
            NEXUS_HOME="/opt/nexus"
            RUN_AS_USER="nexus"
        useradd -s /sbin/nologin -d /var/lib/nexus nexus
        chown -R nexus /opt/nexus/
        chown -R nexus /opt/nexus-2.0.4-1/
        sudo -u nexus cp /opt/nexus/conf/examples/proxy-https/jetty.xml /opt/nexus/conf/

    To force Nexus to be available through HTTPS, I went to Administration - Server - Application Server Settings as admin, changed the Base URL to https://<external IP>/nexus, and set Force Base URL to true. Any ideas how to get the missing Nexus features back?

  • Puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf and then let the file server serve everything):

        [files]
          path /apps/puppet/files
          allow *

        [private]
          path /apps/puppet/private/%H
          allow *

        [modules]
          allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and this is the standard Nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak anything in auth.conf to get this to work?

  • Squid parent cache for text/html only

    - by Salvador
    How do I configure Squid to send only text/html requests to the parent cache? Right now I am using:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest

    On the other hand, I get a lot of direct requests that do not use the parent proxy: some queries go FIRST_UP_PARENT and some go DIRECT. How do I tell Squid to always use the parent for text/html? By the way, it is a transparent proxy. I have tried:

        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        acl elhtml req_mime_type -i ^text/html$
        acl elhtml req_mime_type -i text/html
        cache_peer_access 127.0.0.1 allow elhtml
        cache_peer_access 127.0.0.1 deny all

    and it does not work. Thanks in advance for the help.
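
    One caveat and a hedged workaround (an assumption about intent, not a verified fix): req_mime_type matches the Content-Type of the request body (e.g. uploads), not the type of the reply, which Squid does not yet know when it selects a peer. A sketch of an extension-based approximation (the regex is illustrative):

        acl htmlish urlpath_regex -i \.(html?|php|aspx?)$
        cache_peer 127.0.0.1 parent 8080 0 no-query no-digest
        cache_peer_access 127.0.0.1 allow htmlish
        cache_peer_access 127.0.0.1 deny all
        never_direct allow htmlish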

  • How do I install a newer version of GTK in Ubuntu without replacing the current one?

    - by William Friesen
    I am trying to compile file-roller from git, but running autogen.sh gives me this error:

        configure: error: Package requirements (gtk+-3.0 >= 2.91.1) were not met:
        No package 'gtk+-3.0' found

        Consider adjusting the PKG_CONFIG_PATH environment variable if you
        installed software in a non-standard prefix.

        Alternatively, you may set the environment variables GTK_CFLAGS
        and GTK_LIBS to avoid the need to call pkg-config.
        See the pkg-config man page for more details.

    I am running Ubuntu Maverick and don't wish to completely replace my current version of gtk, glib, etc. I have tried to compile GTK using the --prefix argument of autogen.sh, but this gives me a similar error about my version of glib. How can I successfully compile file-roller using these new libraries without borking my install?
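
    A minimal sketch of the private-prefix approach the error message hints at (the prefix path is illustrative, and each dependency, glib first, has to be built into the same prefix):

        PREFIX=$HOME/gtk3
        export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH
        # inside each dependency's source tree (glib, then gtk+), and finally file-roller:
        ./configure --prefix=$PREFIX && make && make install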

  • Error removing packages in Ubuntu using Synaptic

    - by ronakin
    I'm using Ubuntu 10.04, and while trying to free up space I removed several packages, such as openoffice, all the editors, and some more packages such as players and printer drivers that I don't need and that seemed OK to remove. However, after a restart the graphical interface doesn't load: I'm dropped at the X server with a console but no GUI. I was wondering if anyone can tell me which packages I should not remove, or let me know what dependencies I need to consider when messing with packages? Thanks!
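
    One common recovery path for a 10.04 desktop left without a GUI (a sketch, not a confirmed fix for this particular removal):

        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop gdm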

  • rsyslog - regex trouble

    - by benmccann
    I'm trying to set up the logentries service. If a log entry has a token in it, then I would like to send it to api.logentries.com:10000. The token is a GUID in the format aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee. Right now I'm doing:

        # If there's a logentries token then send it directly to logentries
        :msg, regex, ".*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*"
        & @@api.logentries.com:10000

    I checked the rsyslog debug logs and my regex is not matching, but I can't figure out why or how to fix it:

        5245.961161378:7fb79b514700: Filter: check for property 'msg' (value ' fb1c507f-2ede-4d7f-a140-2bd8d56e133 - application - [play-akka.actor.default-dispatcher-1] - Found user: 4fb11ea5e4b00a1aeebe2800') regex '.*[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}.*': FALSE
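
    Two observations that may help (hedged, not a confirmed diagnosis): rsyslog's plain regex comparison uses POSIX BRE, where {8}-style interval counts need the ERE variant selected with ereregex; and the token in the sample message above ends in an 11-character group, so a strict {12} would not match it in any case. A sketch of the ERE form:

        :msg, ereregex, "[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}"
        & @@api.logentries.com:10000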

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external USB HDD?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created using the "dd" command:

        dd if=/dev/sda3 of=/path/to/disk.img

    I deleted the home partition via GParted in order to enlarge my root partition. Then I recreated the /dev/sda3 partition, which is now smaller than the one I backed up to the image. Since I have a 2TB external HDD, I was wondering whether it is possible to mount my backed-up image on the external HDD and then copy the files into the /home directory. Since the external HDD would already be in a "mounted state", I'm unsure whether this is a good idea, mounting on a mounted device. I'm running Slackware 13.37 (64-bit) and used ext4 on all the partitions; I resized the root partition with the GParted live CD. I've tried:

        mount -t ext4 /path/to/disk.img /mnt/image -o loop

    It gave me an fs error (wrong fs type, bad option, bad superblock on /dev/loop0). Then I did dmesg | tail, which outputs:

        EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks)

    I have no idea what to do; I want to restore my /home data from the image I backed up.
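
    Mounting an image file that lives on an already-mounted drive is fine in itself; the loop device does not care where the file sits. The error suggests the image file is smaller than the filesystem inside it claims to be, so a couple of checks worth running (a sketch; paths are illustrative):

        ls -l /path/to/disk.img                                                   # actual image size in bytes
        sudo dumpe2fs -h /path/to/disk.img | grep -iE 'block count|block size'    # what the superblock expects
        sudo mount -o loop,ro -t ext4 /path/to/disk.img /mnt/image                # read-only attempt once the sizes agree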

  • Unable to build Python modules in Mandriva 2010

    - by SteveJ
    I am trying to build a Python module (pyfits) but I get the following error:

        # python setup.py install
        /home/steve/src/pyfits-2.2.2/stsci_distutils_hack.py:239: DeprecationWarning: os.popen3 is deprecated.  Use the subprocess module.
          (sin, sout, serr) = os.popen3(cmd)
        running install
        error: invalid Python installation: unable to open /usr/lib64/python2.6/config/Makefile (No such file or directory)

    I get the same error when I try to build other modules, so my guess is I am missing a Python development library. I am running Mandriva 2010.0, any suggestions?
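
    The missing config/Makefile normally comes with the Python development package; on Mandriva the package name below is an assumption, worth verifying with urpmf first:

        urpmf python2.6/config/Makefile        # find which package ships the file
        sudo urpmi libpython2.6-devel          # then install it (name is an assumption)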

  • Antialiasing not working in Ubuntu Lucid Lynx 10.04

    - by mac
    I have recently upgraded from Karmic to Lucid (plain Ubuntu using GNOME). Everything worked fine, but the characters are no longer anti-aliased, as you can appreciate from the screenshot (not shown here). This is what I tried to fix the situation, unfortunately without success: used the regular options pane under System - Preferences - Appearance - Fonts (smoothing, hinting...); edited the .fonts.conf file; uninstalled (and then re-installed) the mstcorefont package; changed the default Sans font to a font of my liking (e.g. Tahoma) from the above-mentioned Appearance options. My Ubuntu installation is quite standard, with the typical add-ons one might wish for usability. I used the ubuntu start script to make a few tweaks. Thank you in advance for your help! :)

  • Process for configuring network settings on a headless rack mount device

    - by PherricOxide
    I'm with a small company that plans to sell a rack-mounted network appliance which is configurable via a web interface (think of a router configuration page, sort of), and I'm wondering what the usual process is, in large data-center-like environments, for the initial setup of such systems. The main question is: if the system is headless, how do you get initial remote access to it? Do companies usually first plug a server into a monitor/keyboard/mouse in order to configure the network settings before mounting it in a rack? Otherwise, how would they know what the IP address of the machine was, short of DHCP (and it can't be hard-coded because of the potential for IP conflicts)?
