Search Results

Search found 10878 results on 436 pages for 'changed'.

  • Git clone/push/pull - where does that username come from?

    - by Kuroki Kaze
    I've set up gitosis and am able to pull/push through ssh. Gitosis is installed on a Debian Lenny server; I'm using git from a Windows machine (msysgit). The strange thing is that if I enable loglevel = DEBUG in gitosis.conf, I see something like this when doing any action against the gitosis server:

        D:\Kaze\source\test-project>git pull origin master
        DEBUG:gitosis.serve.main:Got command "git-upload-pack 'test_project.git'"
        DEBUG:gitosis.access.haveAccess:Access check for '[email protected]' as 'writable' on 'test_project.git'...
        DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'test_project.git', new value 'test_project'
        DEBUG:gitosis.group.getMembership:found '[email protected]' in 'test'
        DEBUG:gitosis.access.haveAccess:Access ok for '[email protected]' as 'writable' on 'test_project'
        DEBUG:gitosis.access.haveAccess:Using prefix 'repositories' for 'test_project'
        DEBUG:gitosis.serve.main:Serving git-upload-pack 'repositories/test_project.git'
        From 192.168.175.128:test_project
         * branch            master     -> FETCH_HEAD
        Already up-to-date.

    The question is: why am I [email protected]? This email is in the global user.email config variable, too. Yesterday, when gitosis was installed, it saw me as kaze@KAZE; this is the name under which I was added to the gitosis-admin group (and it worked). But today git (or gitosis) started to see me as [email protected]. This is true for all repositories I push or clone. I had to add this address to gitosis.conf directly on the server to be able to edit configs again (which worked). There are two public keys in keydir: [email protected] and [email protected]; their content is identical and both end with kaze@KAZE. The origin URL looks like git@lennyserver:test_project. Now, the question is: why did Git (or gitosis) suddenly decide to call me by email instead of name@machinename? I changed a couple of things while trying to set up gitosis (updated git on the server to 1.6.0, for example), but maybe I broke something in my local git installation?
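
    For what it's worth, gitosis names you after the keydir file that matches your SSH key, not after the comment inside the key, and two keydir files with identical key material produce two authorized_keys entries for the same key, so whichever one sshd matches first wins. A minimal sketch of removing the duplicate (file names follow the question; treat exact paths as assumptions):

        # in a working clone of the gitosis-admin repository
        git rm keydir/[email protected]      # keep only the identity you want
        # then make sure gitosis.conf group members reference kaze@KAZE
        git commit -am "drop duplicate key" && git push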

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application which uses a DNS wildcard, i.e. *.app.example.com. We're using Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URL, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive would match the hostname of the client request, and we'd get several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now %v appears to be matching the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our logfile analysis because the log file has no indication of which client the log relates to. I can fix this easily by changing the LogFormat to this:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This will use the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. The only concern I have is that this has worked in the past and I can't find any indication that it has changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger and using %v? Is it working fine?
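
    One detail worth checking, offered as a sketch rather than a confirmed diagnosis: in mod_log_config, %v is always the canonical ServerName of the serving vhost, while %V follows the UseCanonicalName setting (with UseCanonicalName Off it tracks the client-supplied hostname). The behaviour described above matches a config where UseCanonicalName changed, or where the format once used %V:

        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog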

  • Install PHP mcrypt on Red Hat 4

    - by Chris
    I'm having a very hard time getting mcrypt for PHP installed on a Red Hat 4 server. I've downloaded the rpm, but it tells me:

        error: Failed dependencies:
            php-common(x86-32) = 5.4.7-2.fc18 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(FileDigests) <= 4.6.0-1 is needed by php-mcrypt-5.4.7-2.fc18.i686
            libc.so.6(GLIBC_2.4) is needed by php-mcrypt-5.4.7-2.fc18.i686
            libltdl.so.7 is needed by php-mcrypt-5.4.7-2.fc18.i686
            rtld(GNU_HASH) is needed by php-mcrypt-5.4.7-2.fc18.i686
            rpmlib(PayloadIsXz) <= 5.2-1 is needed by php-mcrypt-5.4.7-2.fc18.i686

    When I try to install one of those packages, it in turn requires another 8 packages, so I'm diving into dependency hell here. If I instead try to compile mcrypt from source, this is what I get:

        checking for libmcrypt - version >= 2.5.0... no
        *** Could not run libmcrypt test program, checking why...
        *** The test program failed to compile or link. See the file config.log for the
        *** exact error that occured. This usually means LIBMCRYPT was incorrectly installed
        *** or that you have moved LIBMCRYPT since it was installed. In the latter case, you
        *** may want to edit the libmcrypt-config script: no
        configure: error: *** libmcrypt was not found

    But I was able to install libmcrypt from an rpm package successfully. Any suggestions? Also, I cannot use up2date, as it requires an active paid Red Hat account, and since the staff where I work has changed rather rapidly in the last year, no one knows whether there even is a support account.
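
    The .fc18 suffix in that dependency list says the rpm was built for Fedora 18, which will never install on Red Hat 4 (its glibc predates GLIBC_2.4), so the source build is the realistic route. A hedged sketch of the usual fix when configure can't find an rpm-installed libmcrypt; package and prefix names are assumptions, and config.log will name the real failure:

        # common culprit: the libmcrypt rpm ships the library but not the headers
        rpm -q libmcrypt-devel || echo "install a matching libmcrypt-devel rpm first"
        ./configure --with-libmcrypt-prefix=/usr    # point at wherever libmcrypt-config lives
        make && make install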

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via cronjob (it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the sync:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking for the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually re-copy the properties via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query them, it's just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
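
    Since silently vanishing revprops is exactly what a missing, non-executable, or rejecting pre-revprop-change hook produces, it may be worth re-checking the hook on the mirror. A minimal sketch of a permissive hook; the svnsync user name is an assumption:

        #!/bin/sh
        # hooks/pre-revprop-change on the mirror: must be executable and exit 0 to allow writes
        REPOS="$1"; REV="$2"; USER="$3"; PROPNAME="$4"; ACTION="$5"
        [ "$USER" = "svnsync" ] && exit 0     # assumption: the cronjob runs as "svnsync"
        echo "Revision properties may only be changed by the sync user" >&2
        exit 1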

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them via aliases defined in httpd-vhosts.conf:

        #httpd-vhosts.conf
        <VirtualHost _default_:443>
            ServerName localhost:443
            Alias /develop/cpanel "C:/webapps/develop/mil_catele_cp/public"
            Alias /develop/docs/tech "C:/webapps/develop/mil_catele_tech_docs/public"
            Alias /develop/docs "C:/webapps/develop/mil_catele_docs/public"
            Alias /develop/auth "C:/webapps/develop/mil_catele_auth/public"
            Alias /develop "C:/webapps/develop/mil_web_dicom_viewer/public"
            DocumentRoot "C:/webapps/mil_catele_homepage"
        </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. Sites are available at, for example, localhost/develop/cpanel; the framework handles further routing. In Nginx I was able to make only one site available, by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others do, and the site was at localhost/. My next attempt:

        root C:/webapps;
        location /develop/auth {
            root C:/webapps/develop/mil_catele_auth/public;
            try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
        }

    Now when I enter localhost/develop/cpanel it gets to the correct index.php but can't find any resources (css/js files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as they were on Apache. The root directive seems not to be working. Nginx handles php using fastCGI:

        location ~ \.(php|phtml)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param APPLICATION_ENV production;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?
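
    One relevant nginx detail: root appends the full request URI to the path, so /develop/auth/css/bootstrap.css is looked up under .../public/develop/auth/css/bootstrap.css, which is nothing like Apache's Alias. The closer equivalent of Alias is the alias directive, which replaces the matched prefix. An untested sketch for one module, with paths taken from the question:

        location /develop/auth/ {
            alias C:/webapps/develop/mil_catele_auth/public/;
            try_files $uri $uri/ /develop/auth/index.php$is_args$args;
        }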

  • Server setup scripts, patches and migrations

    - by Ben Swinburne
    I have written some scripts which I use to configure various servers in a uniform way. Each time I deploy a server I run the relevant scripts so that I know they're all configured the same. I then have some patch scripts, which are changes to the originals, that I can run to ensure modifications to the original setup are applied on each server. E.g.:

        disable.sh - Disables SELinux etc. to ensure the other scripts all run correctly
        general.sh - Jailkit, AV, repos, RKHunter, security tweaks, uninstalling unused bits, etc.
        web.sh - Installs and configures Apache2
        001_update_nr_licence_key.sh - Updates a licence key for a piece of software which has changed since its install in general.sh

    I can run the first three without a problem, but when it comes to running patches I am a bit stuck. Is there a sensible way of doing this with existing software? My current thought is to write the server's role (web or db, for example) to a log file, and then note the name of each patch that has run. The process could then iterate through a folder, find all patches for that role which it has not yet run, and execute them. This seems a bit long-winded, however. Could someone advise me on the best way to keep my servers uniform?
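
    For reference, the iterate-and-record idea described in the question fits in a few lines of shell. A minimal sketch; the role file, patch directory, and log location are all assumptions:

        #!/bin/sh
        # run-patches.sh: apply patches for this server's role that haven't run yet
        ROLE=$(cat /etc/server-role)              # e.g. "web" or "db" (hypothetical file)
        LOG=/var/log/patches.applied
        touch "$LOG"
        for p in "/opt/patches/$ROLE"/*.sh; do
            name=$(basename "$p")
            grep -qx "$name" "$LOG" && continue   # already applied on this server
            sh "$p" && echo "$name" >> "$LOG"     # record only on success
        done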

  • installing Conkeror on Ubuntu 12.04

    - by Menelaos Perdikeas
    I am reading the instructions on the conkeror site (and elsewhere) on how to install conkeror on Ubuntu (I am using Ubuntu 12.04 LTS), and it seems that the correct sequence is:

        sudo apt-add-repository ppa:xtaran/conkeror
        sudo apt-get update
        sudo apt-get install conkeror conkeror-spawn-process-helper

    The first step (apt-add-repository) seems to execute without a problem, giving the following output:

        You are about to add the following PPA to your system:
         Conkeror Debian packages for Ubuntu releases without xulrunner (i.e. for 11.04 Natty and later)
         More info: https://launchpad.net/~xtaran/+archive/conkeror
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.Re7pWaDEQF --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv CB29CBE050EB1F371BAB6FE83BE0F86A6D689050
        gpg: requesting key 6D689050 from hkp server keyserver.ubuntu.com
        gpg: key 6D689050: "Launchpad PPA for Axel Beckert" not changed
        gpg: Total number processed: 1
        gpg:              unchanged: 1

    However, apt-get update seems unable to fetch packages from the newly added PPA, with its output ending in:

        Hit http://security.ubuntu.com precise-security/restricted Translation-en
        Hit http://security.ubuntu.com precise-security/universe Translation-en
        Err http://ppa.launchpad.net precise/main Sources  404 Not Found
        Ign http://extras.ubuntu.com precise/main Translation-en_US
        Err http://ppa.launchpad.net precise/main i386 Packages  404 Not Found
        Ign http://extras.ubuntu.com precise/main Translation-en
        Ign http://ppa.launchpad.net precise/main Translation-en_US
        Ign http://ppa.launchpad.net precise/main Translation-en
        W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/source/Sources  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/xtaran/conkeror/ubuntu/dists/precise/main/binary-i386/Packages  404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Accordingly, apt-get install conkeror fails with:

        mperdikeas@mperdikeas:~$ sudo apt-get install conkeror
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package conkeror

    Any ideas what might be wrong?
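
    Those 404s mean the PPA publishes no packages for the precise series, so apt has nothing to index. One hedged workaround sketch is to point the source line at a series the PPA does build for; both the file name and the oneiric series below are assumptions, so check the PPA's Launchpad page for what is actually published:

        sudo sed -i 's/precise/oneiric/g' /etc/apt/sources.list.d/xtaran-conkeror-precise.list
        sudo apt-get update
        sudo apt-get install conkeror conkeror-spawn-process-helper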

  • 5GHz vs 2.4GHz dual-band router, max Mbps

    - by Tallboy
    I've done a fair amount of reading before posting this, but there are a few things still unclear. I just bought a Netgear WNDR3700 N600. I understand that 5GHz offers more channels, suffers less interference because of those extra channels, won't interfere with a microwave, and so on, and also has a shorter range. Currently my router is broadcasting both signals (2.4 for my iPhone, 5 for my computer). But my question is: what is the max speed of 5GHz in Mbps? In the router settings it allows me to set '300mbps', but I keep reading online that the max is 54. Is this true? I noticed when I set up the router that the default for 2.4 was set to 300 and the default for 5 was set to 54, so I changed both to 300. Is this fine as well? I don't see why it wouldn't be maxed out for both by default. On the box it says a max rate of 300+300, so I assume this is correct, but perhaps it's throttled down so the router isn't stressed in case something is streaming media 24/7 and slowing the internet down. Also: what is the max range of 5GHz? My apartment is 780 square feet, and the router is in the main living room.

  • File permissions question

    - by Matthew Robert Keable
    I just switched my site's server from Windows to Linux, and am finally able to control file permissions from my FTP client. Seeing that all permissions were 705 by default (and not wanting just anyone to have permission to execute), I went and changed everything to 744. Now gif and jpg links don't work, pdf download links don't work, php links don't load, and mov files don't play. Conversely, all html files work perfectly. Setting things back doesn't seem to help; even setting to 777 gets me nowhere. Any ideas on what might be going wrong? I've been googling file permissions all day (I solved the problem with the Windows-Linux switch, which has bred this new one), and I don't think anything I can find has escaped my attention. The site: absis-minas.com. Go easy on a n00b. I took up learning php out of interest, and wound up delving into server management issues due to a very simple line of code not working the way it was supposed to. Thanks!
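
    A note on the octal digits involved, since this trips a lot of people up: on a directory, the "execute" bit means "may be traversed", so a web server typically needs directories at 755 and plain files at 644 (the execute bit on a .php file is irrelevant when PHP runs through the web server). A hedged cleanup sketch, assuming shell access and a hypothetical docroot of /var/www/site:

        find /var/www/site -type d -exec chmod 755 {} \;   # directories: traversable
        find /var/www/site -type f -exec chmod 644 {} \;   # files: readable, not executable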

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server which was virtualised. The CPU mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings, which are advanced options, and this is blocked because of the affinity mask issue. I've tried doing this from the SQL server in single-user command-line mode, and I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below.

        sp_configure 'show advanced options', 1
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        sp_configure 'affinity mask', 0x00000000
        GO
        RECONFIGURE
        GO
        -----------------------------------------
        Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
        Msg 5832, Level 16, State 1, Line 1
        The affinity mask specified does not match the CPU mask on this system.
        Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
        The configuration option 'affinity mask' does not exist, or it may be an advanced option.
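
    One avenue, offered as a sketch rather than a verified fix for this exact state: start the instance with minimal configuration, which skips the stored affinity setting, then zero the value and restart normally. Note too that sp_configure expects an integer argument, so plain 0 is safer than a 0x... literal:

        REM from an elevated prompt (default instance name assumed)
        net stop MSSQLSERVER
        sqlservr.exe -f -m
        REM then, from sqlcmd in a second window:
        REM   EXEC sp_configure 'affinity mask', 0;
        REM   RECONFIGURE WITH OVERRIDE;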

  • Need for explanation: NetBIOS over TCP/IP on VMware network adapter disturbs access to network share

    - by gyrolf
    (Moved here from Stack Overflow.) Some time ago nearly all workstations in our team (Windows XP SP2) exhibited intermittent but frequent delays when accessing shares on the network. Typically the first access to a share which hadn't been accessed for some time resulted in a nearly frozen workstation for up to 30 seconds; then everything started working fine again. Using TCPView from Sysinternals, I saw that during these delays there was a connection to the netbios-ssn port on the file server which was in state SYN_SENT.

    First try: disable NetBIOS over TCP/IP for the intranet network adapter. Problem solved, but I didn't like manipulating our centrally managed network configuration for the intranet. Second try: disable NetBIOS over TCP/IP only for the VMware network adapter (VMnet1, used for host-only communications). Problem solved again!

    My questions: Why does NetBIOS over TCP/IP on one network adapter disturb NetBIOS over TCP/IP on another network adapter? Is this problem specific to VMware network adapters? Has anybody else seen this phenomenon?

    Additional information: VMware Workstation version 6.0.3. By the time I started seriously analysing the problem, it was no longer possible to find out what had been changed on our systems at the time the problems started.

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was producing very BIG files, so I now use cronolog to split my logs into, for example, log/httpd/2012/11/access_2012.11.30.log (pattern: %Y/%m/access_%Y.%m.%d.log). I now want to split my old 42GB file into the same structure, but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, and awk, but really don't know how to handle all of that in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to get the year %Y (e.g. 2012), the month %m (e.g. 11), and the day %d, and push the entire line out to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
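
    A sketch of a one-pass awk split along those lines; it assumes the timestamp sits in field 4 as in the sample above, and that gawk is available (gawk keeps each output file open, so very old awks may hit open-file limits):

        awk '
        BEGIN {
            n = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", names, " ")
            for (i = 1; i <= n; i++) m[names[i]] = sprintf("%02d", i)
        }
        {
            split($4, t, "[[/:]")             # t[2]=day, t[3]=month name, t[4]=year
            dir = t[4] "/" m[t[3]]
            if (!(dir in made)) { system("mkdir -p " dir); made[dir] = 1 }
            print >> (dir "/access_" t[4] "." m[t[3]] "." t[2] ".log")
        }' old_access.log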

  • rsync general question

    - by CaptnLenz
    I'm trying to use rsync. At first, everything looks very good:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f+++++++++ Serien/blah.avi
        <f+++++++++ Serien/blah S01E01
        <f+++++++++ Serien/blah - S01E02
        <f+++++++++ Serien/blah - S01E03
        <f+++++++++ Serien/blah - S01E04
        <f+++++++++ Serien/blah - S01E05
        <f+++++++++ Serien/blah - S01E06
        <f+++++++++ Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    After that, I copied some of the files manually and ran rsync again in dry-run mode:

        rsync -Pniahv -e ssh /home/xxx/Videos/ [email protected]:"/shares/Public/Shared\ Videos/" --stats
        ...
        <f..tpo.... Serien/blah.avi
        <f..tpo.... Serien/blah S01E01
        <f..tpo.... Serien/blah - S01E02
        <f..tpo.... Serien/blah - S01E03
        <f..tpo.... Serien/blah - S01E04
        <f..tpo.... Serien/blah - S01E05
        <f..tpo.... Serien/blah - S01E06
        <f..tpo.... Serien/blah - S01E07
        ...
        Number of files: 232
        Number of files transferred: 223
        Total file size: 118.24G bytes
        Total transferred file size: 117.51G bytes
        Literal data: 0 bytes
        Matched data: 0 bytes
        File list size: 9.46K
        File list generation time: 0.001 seconds
        File list transfer time: 0.000 seconds
        Total bytes sent: 10.18K
        Total bytes received: 712

    Why hasn't anything changed in the --stats output, given that only the permissions and the timestamps need to be updated and the full files don't need to be copied?

  • Break all hardlinks within a folder

    - by Georges Dupéron
    I have a folder which contains a certain number of files which have hard links (in the same folder or somewhere else), and I want to de-hardlink these files so they become independent and changes to their contents won't affect any other file (their link count becomes 1). Below, I give a solution which basically copies each hard link to another location, then moves it back in place. However, this method seems rather crude and error-prone, so I'd like to know if there is some command which will de-hardlink a file for me.

    Crude answer: find files which have hard links (Edit: to also find sockets, etc. that have hardlinks, use find -not -type d -links +1):

        find -type f -links +1

    A crude method to de-hardlink a file (copy it to another location, and move it back). Edit: as Celada said, it's best to do a cp -p below, to avoid losing timestamps and permissions. Edit: create a temporary directory and copy to a file under it, instead of overwriting a temp file; it minimizes the risk of overwriting some data, though the mv command is still risky (thanks @Tobu).

        # This is unhardlink.sh
        set -e
        for i in "$@"; do
            temp="$(mktemp -d ./hardlnk-XXXXXXXX)"
            [ -e "$temp" ] && cp -ip "$i" "$temp/tempcopy" && mv "$temp/tempcopy" "$i" && rmdir "$temp"
        done

    So, to un-hardlink all hard links (Edit: changed -type f to -not -type d, see above):

        find -not -type d -links +1 -print0 | xargs -0 unhardlink.sh

  • No internet access when using static IP

    - by Endy Tjahjono
    I have just upgraded to Windows 8.1, and after the upgrade process finished I couldn't connect to the internet. I tried running "Troubleshoot problems", and it concluded that DHCP needed to be activated. I let it activate DHCP, and I got my internet connection back. The problem is that I want to set this PC to a certain IP address (the one it has been using all this time). I am also using Hyper-V, which I suspect has something to do with this problem. After I regained internet connection, I tried running one of my Hyper-V VMs; from inside the VM I can connect to the internet, and that VM has a static IP address. I also noticed that in "Control Panel\Network and Internet\Network Connections" I usually have a network connection called vEthernet (Realtek PCIe GBE Family Controller Virtual Switch); I didn't find it there after the upgrade. How do I set my PC to a static IP while retaining internet access in Windows 8.1?

    EDIT: I have managed to recreate vEthernet (Realtek PCIe GBE Family Controller Virtual Switch) by unchecking Allow management operating system to share this network adapter in Hyper-V's Virtual Switch Manager and then checking it again. But when I changed the adapter to use a static IP, it still can't connect to the internet. Result of Get-NetAdapter -Name * | fl (with MAC addresses removed):

        Name                       : vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #2
        InterfaceIndex             : 5
        MacAddress                 : 55-55-55-55-55-55
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet 3
        InterfaceDescription       : Hyper-V Virtual Ethernet Adapter #3
        InterfaceIndex             : 6
        MacAddress                 : 55-55-55-55-55-56
        MediaType                  : 802.3
        PhysicalMediaType          : Unspecified
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Gbps)            : 10
        MediaConnectionState       : Connected
        ConnectorPresent           : False
        DriverInformation          : Driver Date 2006-06-21 Version 6.3.9600.16384 NDIS 6.40

        Name                       : Ethernet
        InterfaceDescription       : Realtek PCIe GBE Family Controller
        InterfaceIndex             : 2
        MacAddress                 : 55-55-55-55-55-57
        MediaType                  : 802.3
        PhysicalMediaType          : 802.3
        InterfaceOperationalStatus : Up
        AdminStatus                : Up
        LinkSpeed(Mbps)            : 100
        MediaConnectionState       : Connected
        ConnectorPresent           : True
        DriverInformation          : Driver Date 2013-05-10 Version 8.1.510.2013 NDIS 6.30
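
    For reference, when a Hyper-V external switch shares the physical NIC, the static settings belong on the vEthernet adapter, and a missing default gateway or DNS server is a common reason "static works, internet doesn't". A hedged PowerShell sketch; all addresses are placeholders for your own values:

        # run elevated; adapter alias copied from the question, addresses are assumptions
        $alias = "vEthernet (Realtek PCIe GBE Family Controller Virtual Switch)"
        New-NetIPAddress -InterfaceAlias $alias -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
        Set-DnsClientServerAddress -InterfaceAlias $alias -ServerAddresses 192.168.1.1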

  • Looking for suitable backup solution Mac OS X to offsite Centos 6 server 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

    - An on-site file server (Mac OS X Server) used by GFX designers, holding a working set of 1TB of data.
    - An offsite server with 2TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day cycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.

    The problem: the onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape. A full backup is incredibly slow, and it's a pain getting people to remember to change the tape; it often gets forgotten.

    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...

    What I would like: something that works the way rsync works, where only changed data is sent over the wire, and that can be scheduled to run multiple times during the day. Data should be compressed and sent over SSH. It should keep a 14-day retention but not keep duplicate data; so, as an example, if I have 1TB of working data and 60GB of changes are made each day, I'd expect around 1.84TB of data to be stored on the offsite server. It needs to work with the Mac OS X server and the CentOS 6 server, not cost an arm and a leg (it must be cheaper than an LTO-5 drive with tapes, around £1500), be able to run autonomously, and have some sort of control panel that lets an admin easily restore a file or folder. Any recommendations?
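
    One approach that matches most of that wish list is hardlink-based rotating snapshots with rsync --link-dest: unchanged files are hardlinked against the previous snapshot, so only changed data consumes space, which gives exactly the "baseline plus daily deltas" storage profile described above. A minimal sketch, with host names and paths as assumptions (the first run has no "latest" to link against and simply copies everything):

        #!/bin/sh
        # daily snapshot over ssh with hardlink dedup and 14-day retention
        DAY=$(date +%Y-%m-%d)
        rsync -az --delete -e ssh --link-dest=../latest \
            /Volumes/Work/ backup@offsite:/backups/$DAY/
        ssh backup@offsite "ln -sfn /backups/$DAY /backups/latest; \
            find /backups -maxdepth 1 -type d -name '????-??-??' -mtime +14 -exec rm -rf {} +"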

  • Netbook thinks it is a desktop

    - by Narcolapser
    Question: are there packages for download that will get my netbook to understand it is a netbook and not a desktop, and if so, which? Info: I'm running an Acer Aspire One with Ubuntu Desktop 9.10. I tried Ubuntu Netbook Remix first, but it has graphics issues with the Aspire One, so I changed to Ubuntu Desktop. It was the only distro (after Debian, CentOS, Fedora, and Knoppix all failed me) that I managed to get working. The only thing is that it is having issues doing things that a netbook/laptop should be doing. Most notably, it will run its battery dead if I close the screen and throw it into my backpack; it seems to just stay fully on and runs itself to death. It also locks up sometimes if I close the screen and come back to it 10 or 20 minutes later. It won't retain volume settings when I reboot, nor screen brightness, and just a couple of other things seem amiss that I can't quite put my finger on. Like I said, essentially my netbook thinks it is a desktop. How can I fix this? ~N

  • How do I format a text file for IIS Mailroot Pickup so that it sends an e-mail with attachments?

    - by Ben McCormack
    How do I need to format a text file so that it can be read by an SMTP service to send an e-mail that has an attachment? We have a server where we are using IIS6 SMTP to send mail from a Pickup folder. The goal is to drop a properly formatted text file into Mailroot\Pickup, and then the file will be automatically processed and sent to the correct SMTP recipient. For simple files, this works correctly. Here's an example of a simple file that works (domain names changed):

        From:[email protected]
        To:[email protected]
        Subject:Hello World!

        Test Body Of The E-mail

    When I drop a text file containing the above contents into the Mailroot\Pickup folder, it sends correctly. However, I haven't been able to figure out how to get an attachment to work. I found some material that explained how to encode an SMTP attachment, and another tool for simple base64 encoding conversion. Using the information on those pages, I came up with the following text:

        From:[email protected]
        To:[email protected]
        Subject:Hello World!
        MIME-Version: 1.0
        Content-Type: text/plain; boundary="Attached"
        Content-Disposition: inline;

        --Attached
        Content-Transfer-Encoding: base64
        Content-Type: text/plain; name="attachment.txt"
        Content-Disposition: attachment; filenamename="attachment.txt"

        VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu
        ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA==
        --Attached--

    However, when I place the above text in a file and drop it into Mailroot\Pickup, it doesn't send an attachment correctly. Instead, an e-mail shows up with everything from the MIME-Version line onwards, headers, boundaries, and base64 text included, dumped as plain text into the body of the message. I can't figure out what I need to do to format the text file so that the SMTP service correctly sends attachments.
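
    For reference, a sketch of the MIME shape that generally works for attachments: the top-level Content-Type must be multipart/mixed (text/plain cannot carry a boundary), each part's headers are followed by a blank line, and note the filename= parameter spelling. This is hedged as a general MIME example, not verified against IIS6 specifically:

        From:[email protected]
        To:[email protected]
        Subject:Hello World!
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="Attached"

        --Attached
        Content-Type: text/plain

        Test body of the e-mail.

        --Attached
        Content-Type: text/plain; name="attachment.txt"
        Content-Disposition: attachment; filename="attachment.txt"
        Content-Transfer-Encoding: base64

        VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu
        ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA==
        --Attached--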

  • Cannot change PostgreSQL port

    - by Jerec TheSith
    I run PostgreSQL 8.4 as a service on a CentOS 6.2 server. I set port = 21444 and listen_addresses = '*' in /var/lib/pgsql/data/postgresql.conf, and I changed 5432 to 21444 in postmaster.opts and restarted postgres, but when I run netstat -lntp, postgresql is still listening on port 5432:

        tcp        0      0 0.0.0.0:5432      0.0.0.0:*     LISTEN      20276/postmaster

    When I restart postgresql I get a write-error warning on /proc/self/oom_adj, but the service starts anyway. I read that we can get this error when using virtualized servers, but I don't really know if it has any impact on the postgresql listening port. The correct pgsql config directory is loaded, /var/lib/pgsql/data:

        [root@srv02 ~]# ps -ef | grep postgres
        root      1358 22140  0 09:42 pts/0    00:00:00 grep postgres
        postgres  9519     1  0 Mar16 ?        00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
        postgres  9573  9519  0 Mar16 ?        00:00:00 postgres: logger process
        postgres  9575  9519  0 Mar16 ?        00:00:05 postgres: writer process
        postgres  9576  9519  0 Mar16 ?        00:00:03 postgres: wal writer process
        postgres  9577  9519  0 Mar16 ?        00:00:01 postgres: autovacuum launcher process
        postgres  9578  9519  0 Mar16 ?        00:00:01 postgres: stats collector process

    Any thoughts? Thanks, Jerec
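
    The ps output above holds the clue: the postmaster was started with an explicit -p 5432, and a command-line flag overrides postgresql.conf (postmaster.opts is rewritten by the server at startup, so editing it has no effect). On CentOS the init script takes that flag from PGPORT. A hedged sketch of the usual fix; the sysconfig path is an assumption based on the stock Red Hat initscript:

        echo 'PGPORT=21444' >> /etc/sysconfig/pgsql/postgresql
        service postgresql restart
        netstat -lntp | grep postmaster   # should now show :21444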

  • PC only boots from Linux-based media and won't boot from DOS-based media

    - by Xolstice
    I have this problem where the PC only seems to boot from a floppy disk or CD if it was created as Linux-based bootable media. If it was created as DOS-based bootable media, the system just freezes at the start of the boot process. I originally asked this under question 139515 for CD booting only, and based on the given answers I was under the impression that the problem was with the CD-ROM drive; however, I have since installed a newly purchased CD-ROM drive and the same freezing occurs. This then made me try the DOS bootable floppy disk approach, and I was quite surprised that it exhibited the same freezing problem. I then tried a Linux bootable floppy and everything booted from it without any issues. As I mentioned in my original question, the PC was booting just fine from the DOS-based bootable CD, and then it suddenly decided to pull this freezing stunt. I can't remember if I changed anything in the BIOS settings that might have caused the problem, but I am wondering if that could be the case. The machine is currently using the Award Modular BIOS v4.60PGMA. Can anyone help?

  • Getting file not found error with pdebuild

    - by user35042
    I am attempting to build a Debian package using pdebuild on my main development server (running Debian wheezy). Here is the command I run:

        pdebuild --pbuilder cowbuilder --buildresult .. \
            --debbuildopts -i -- \
            --basepath /var/cache/pbuilder/base-wheezy.cow \
            --distribution wheezy --configfile /etc/pbuilder/wheezy

    This works on other servers, but on one server I get this output:

        I: using cowbuilder as pbuilder
        dpkg-buildpackage: source package libexample-orange-util-perl
        dpkg-buildpackage: source version 0.08
        dpkg-buildpackage: source changed by John User <[email protected]>
        dpkg-source -i --before-build libexample-orange-util-perl
        fakeroot debian/rules clean
        dh clean
           dh_testdir
           dh_auto_clean
           dh_clean
        dpkg-source -i -b libexample-orange-util-perl
        dpkg-source: info: using source format `3.0 (native)'
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.tar.gz
        dpkg-source: info: building libexample-orange-util-perl in libexample-orange-util-perl_0.08.dsc
        dpkg-genchanges -S >../libexample-orange-util-perl_0.08_source.changes
        dpkg-genchanges: including full source code in upload
        dpkg-source -i --after-build libexample-orange-util-perl
        dpkg-buildpackage: source only upload: Debian-native package
        File not found: ../libexample-orange-util-perl_0.08.dsc

    There is no file ../libexample-orange-util-perl_0.08.dsc, but on other build servers no such file is needed (it gets created by the package build). What is causing this "file not found" error?

  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7; all the sites were originally running on a server running IIS6. Ever since the migration, lots of our clients have been reporting error messages. There seem to be quite a number of issues related to sending emails, and the following error message has also been reported by several different clients:

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or
        cluster, ensure that <machineKey> configuration specifies the same validationKey and
        validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current web
        request. Please review the stack trace for more information about the error and where
        it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If
        this application is hosted by a Web Farm or cluster, ensure that <machineKey>
        configuration specifies the same validationKey and validation algorithm. AutoGenerate
        cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. I was wondering if someone could tell me whether there is something specific which needs to be changed for .NET sites when they are moved from a server running IIS6 to a server running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated.
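
    For what it's worth, the usual way to rule out key regeneration on the new server (auto-generated machine keys change across app pool recycles and worker processes) is to pin an explicit machineKey in each site's web.config. A hedged sketch; the key values below are placeholders, so generate your own rather than copying any sample:

        <!-- in web.config, inside <configuration> -->
        <system.web>
          <machineKey
            validationKey="[your generated validation key here]"
            decryptionKey="[your generated decryption key here]"
            validation="SHA1" decryption="AES" />
        </system.web>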

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). I'm not actually a sysadmin; I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and which contains only myself). I did the following:

        1. Created the GPO in MyOU - Users
        2. Removed the default Authenticated Users entry under Security Filtering
        3. Added the security group containing my account to Security Filtering
        4. Set up the mapping via the User Configuration option
        5. Changed GPO Status to "Computer configuration settings disabled"
        6. Left WMI filtering unset, and closed the GPO at this point
        7. Logged in as the target user; ran gpupdate /force
        8. Logged out, logged in, ran gpresult /r; no mention of my GPO
        9. Rebooted, logged in, re-ran gpupdate /force
        10. Logged out, logged in, ran gpresult /r; still no mention of my GPO

    If I log in with a completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should apply to. Is there anything else I can do short of rebooting endlessly and crossing my fingers?
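
    Two things worth checking, offered as hedged suggestions rather than a diagnosis: group membership is stamped into the logon token, so the new group must actually appear in whoami /groups for the filtering to match; and when a group replaces Authenticated Users under Security Filtering, Authenticated Users should usually keep plain Read (not Apply) on the GPO's Delegation tab. A sketch using the GroupPolicy PowerShell module; the GPO and group names are hypothetical placeholders:

        whoami /groups | findstr /i "MyMappedFolderGroup"
        # grant read-only access back to Authenticated Users
        Set-GPPermissions -Name "MyFolderGPO" -TargetName "Authenticated Users" `
            -TargetType Group -PermissionLevel GpoRead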

  • Keyboard's media keys are blocked by a program

    - by Mike Hanson
    I've got a Microsoft Natural Ergonomic Keyboard 4000. In addition to the regular keys, it's also got keys for Web/Home, Search, Mail, Favorites (5), Calculator, and Media functions (Mute, Volume Up/Down, and Play/Pause). Everything works most of the time; the exception is rather odd. I use a programming system called Clarion. When it has focus, the media keys don't work (all the others still do). I've also discovered that programs I create using Clarion block the media keys too, but only when they have focus. This indicates that it's probably something in Clarion's run-time library (RTL) that's causing the trouble. The keys will work if I click on a non-Clarion window before hitting a media key, but that's an undesirable hassle. The odd thing is that I have many colleagues with the same keyboard, and they have no problem. When I recently upgraded from Vista Professional to Win7 Ultimate, I noticed that various things "appear" differently. For example, on my old system, when I changed or muted the volume, the volume-bar visualization always appeared at the bottom right of the screen; now it doesn't appear in certain programs, even when the keys work. This suggests an order of precedence for visual elements, and I'm fairly certain a similar order of precedence exists for keyboard hooks. Depending on how the hooks are defined, and the order in which they're applied, it would seem that sometimes the IntelliType drivers don't see the media keystrokes. The media keys probably behave differently than the rest of the "special" keys because they are more of a standard across all keyboards, so perhaps they are handled by a different driver hooking mechanism. Does anyone have any suggestions of how I might fix this problem? Is there some way to change the order of hooks, or to delay the loading of the IntelliType driver? Thanks in advance!

  • Postfix mail forwarder

    - by Andrew
    Hello, I just bought a dedicated server and I'm trying to install a webserver on it. The server is Ubuntu 10.04. I installed ftp, nginx, php, mysql, and bind, and now I have to install a mail server. For the mail server I'm using Postfix, because it's recommended on Ubuntu. I installed Postfix with apt-get install postfix, but the mail() function from php wasn't working. After a little debugging I found a way to solve this: I created an empty file /etc/postfix/main.cf, and it worked. I do have an MX record like this:

        mail          5M  IN  A   xxx.xxx.xxx.xxx
        example.com.  5M  IN  MX  1  mail.example.com.

    After that, I wanted to forward all e-mails to my Gmail address. So I googled for it and found Virtual Domain Host Forwarding in the official docs. I added these lines to main.cf:

        virtual_alias_domains = example.com
        virtual_alias_maps = hash:/etc/postfix/virtual

    I created the map file and placed this line in it:

        @example.com [email protected]

    Then I ran in a terminal:

        postmap /etc/postfix/virtual
        postfix reload

    The result: I can send e-mail from php with the mail() function, but when I send an e-mail to [email protected], it isn't forwarded to my Gmail. How do I solve this? -Andrew

    I also tried this, but it didn't work either: http://rackerhacker.com/2006/12/26/postfix-virtual-mailboxes-forwarding-externally/

    It works now! But I don't know what the problem was. I just installed "Mail Server" from Tasksel and after that it worked fine. Can anyone tell me what Tasksel installed or changed?
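
    A hedged guess at the closing question, grounded in how the Ubuntu packages behave: tasksel's "Mail server" task reconfigures postfix with the "Internet Site" template, which replaces an empty main.cf with a working baseline that the virtual-alias lines alone don't provide. A sketch of the kind of minimal settings involved; the values are illustrative, not a dump of what tasksel wrote:

        myhostname = mail.example.com
        inet_interfaces = all
        mydestination = localhost       # example.com stays out: it's a virtual alias domain
        virtual_alias_domains = example.com
        virtual_alias_maps = hash:/etc/postfix/virtual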
