Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 725/1981

  • SELinux blocking connection to sshd on Ubuntu 9.10

    - by Barton Chittenden
    When I try to log on to my laptop, which runs Ubuntu 9.10, the server rejects my login attempts. Checking /var/log/auth.log, I see the following:
      Feb 14 12:41:16 tiger-laptop sshd[6798]: error: ssh_selinux_getctxbyname: Failed to get default SELinux security context for tiger
    I googled for this and ran across the following thread: http://www.spinics.net/lists/fedora-.../msg13049.html
    Here's the part that I think relates to the problem I'm having:
    Quote: "What's wrong on my system? Why is it not possible to log in even if SELinux is in permissive mode? Any suggestions?"
    "I'd start by trying to figure out why sshd isn't running in sshd_t (it seems to be running in sysadm_t). Paul." (selinux mailing list, selinux@xxxxxxxxxxxxxxxxxxxxxxx, https://admin.fedoraproject.org/mail...stinfo/selinux)
    "Yes, sshd is running in sysadm_t:
      ps axZ | grep sshd
      system_u:system_r:sysadm_t 3632 ? Ss 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pi
      ls -Z /usr/sbin/sshd
      system_u:object_r:sshd_exec_t /usr/sbin/sshd
    Don't know why it's not sshd_t. I didn't modify anything. It's a standard installation of SLES 11 with the default reference policy from Tresys. Maybe this snippet from policy/modules/services/ssh.te is responsible: 'Allow ssh logins as sysadm_r:sysadm_t -- gen_tunable(ssh_sysadm_login, true)'. Any ideas?"
    "Do you have the boolean init_upstart set to on? If not, try setting it to on. I do not believe the ssh_sysadm_login boolean works currently, but I may be mistaken."
    "Yeah, setting init_upstart to on did the trick! THANKS A LOT! Do you know why this prevents the user from logging in through ssh even if SELinux is set to permissive?"
    OK, so the million-dollar question is "where do I set 'init_upstart=1'"? It's not clear from context which configuration file needs to be edited, and I'm not at all familiar with SELinux configuration.
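
    For reference, SELinux booleans like init_upstart live in the loaded policy rather than in a single text file, which is why no obvious config file presents itself. A minimal sketch of how such a boolean is usually flipped, assuming the policycoreutils tools are installed:

      # show the current value of the boolean named in the thread above
      getsebool init_upstart
      # set it to on; -P writes it into the policy store so it survives reboots
      sudo setsebool -P init_upstart 1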

    Read the article

  • Messages stuck in SMTP queue - Exchange 2003

    - by Diav
    I need your help people ;-) I have a problem with messages coming into our Exchange server and ones going out through it. Basically, the messages are stuck in the SMTP queue. A message will come into the server, and I can see it listed under "Exchange System Manager", but if you list the properties of the message queue it says something like:
      00:10 SMTP Message queued for local delivery
      00:10 SMTP Message delivered locally to [email protected]
      00:10 SMTP Message scheduled to retry local delivery
      00:11 SMTP Message delivered locally to [email protected]
      00:11 SMTP Message scheduled to retry local delivery
      etc. etc.
    For an outgoing message the list looks like this:
      10:55 SMTP: Message Submitted to Advanced Queuing
      10:55 SMTP: Started Message Submission to Advanced Queue
      10:55 SMTP: Message Submitted to Categorizer
      10:55 SMTP: Message Categorized and Queued for Routing
      10:55 SMTP: Message Routed and Queued for Remote Delivery
    And that's the end - since then the status hasn't changed, the message sits in the queue, and I force the connection from time to time but without effect. I checked the connection to the smarthost (used telnet for that) and everything seems to work correctly, so the problem is probably on the Exchange side. I am using Exchange Server 2003 running on Small Business Server 2003. I don't have any antivirus installed on the server. Remaining free space on each partition is over 3 GB; on the partition with the databases it is over 12 GB. Everything worked without problems since 2005; the trouble started in mid-June - messages started getting stuck almost randomly on the way out (I don't see a pattern yet: some go out, some do not, some go out after several hours). I don't know what to do or what to check next, so please, any ideas? Best regards, D.
    edit: Priv1.edb is 14.5 GB and priv1.stm 2.6 GB - together those files are more than 16 GB - can that be the reason? If yes, then what? Indeed, I hadn't thought it could be related, but several users reported recent problems with Outlook Web Access - they can log in, they see the list of their mails, but they can't see the content of their emails. When they connect with Outlook 2003/2007 there is no such problem; only with OWA.
    edit2: So... it works now, and I have to admit I am not really sure what the problem was (I hope it won't come back). What I did:
      - Cleaned up some mailboxes to reduce their size
      - Dismounted the Information Store
      - Defragmented the database files (I used eseutil: c:\program files\exchsrvr\bin eseutil /d g:\data base\Exchsrvr\MDBDATA\priv1.edb)
      - Mounted the Information Store back
    ...and before I managed to do anything else, my queue started moving; elements which had been stuck there for days started moving, and after a few minutes everything was sent, both externally and locally. But: priv1.edb is still big (13,884,203,008 bytes), and priv1.stm as well (2,447,384,576), so this was probably not an issue of file size. And if not that, what was it? And if it was an issue of file size, then it will soon repeat - is there something I can do to avoid it?
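
    One hedged observation on the sizes mentioned above: on Exchange 2003 Standard prior to SP2 the mailbox store is limited to 16 GB (.edb and .stm combined), which these files are brushing up against, so checking the service pack level (SP2 raises the limit) and the configured store size limit is worthwhile. For reference, a sketch of the offline defragmentation step described in edit2 (the store must be dismounted first, and eseutil needs free space of roughly 110% of the database size for its temporary copy):

      cd "c:\program files\exchsrvr\bin"
      eseutil /d "g:\data base\Exchsrvr\MDBDATA\priv1.edb"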

    Read the article

  • Permission denied when copying on a fileshare in Finder, but copying via command line works

    - by smokris
    I'm trying to copy files on a SMB fileshare. When I attempt to copy the files in Finder, I get the following error:
      The operation can’t be completed because you don’t have permission to access some of the items.
    Copying via Terminal.app (using a simple cp command) works just fine. Permissions on the folders (as seen from the computer attached to the fileshare) are as follows:
    Source:
      dr-xr-x---   2 smokris staff   16384 Oct 13 10:55 .
      dr-xr-x---@ 61 smokris staff   16384 Oct 13 10:56 ..
      -r--r-----   1 smokris staff   53970 Oct 13 10:55 ._IMG_3823.JPG
      -r--r-----@  1 smokris staff 3135600 Oct 13 10:55 IMG_3823.JPG
    Destination:
      drwxrwx--- 2 smokris staff 16384 Apr 9 10:17 .
      drwxrwx--- 3 smokris staff 16384 Apr 9 10:15 ..
    Any ideas?
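
    A hedged diagnostic, since Finder enforces ACLs and extended attributes that a plain cp ignores, and the ._IMG_3823.JPG AppleDouble file above hints that metadata is involved (the paths below are illustrative):

      # -e lists any ACL entries, -@ lists extended attributes; either can make Finder refuse a copy that cp allows
      ls -le@ /Volumes/share/source
      # if stray AppleDouble (._*) files are the culprit, dot_clean merges them back into their data files
      dot_clean /Volumes/share/source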

    Read the article

  • .htaccess working on remote server but does not work on localhost. Getting 404 errors on localhost

    - by Afsheen Khosravian
    MY PROBLEM: When I visit localhost the site does not work. It shows some text from the site, but it seems the server cannot locate any other files. Here is a snippet of the errors from Firebug:
      "NetworkError: 404 Not Found - localhost/css/popup.css"
      "NetworkError: 404 Not Found - localhost/css/style.css"
      "NetworkError: 404 Not Found - localhost/css/player.css"
      "NetworkError: 404 Not Found - localhost/css/ui-lightness/jquery-ui-1.8.11.custom.css"
      "NetworkError: 404 Not Found - localhost/js/jquery.js"
    It seems my server is looking for the files in the wrong places. For example, localhost/css/popup.css is actually located at localhost/app/webroot/css/popup.css. I have the site set up on a remote server with the exact same configuration and it works perfectly fine; I am only having this issue trying to run the site on my laptop at localhost. I edited my VirtualHosts file's DocumentRoot (and the matching Directory block) to /home/user/public_html/site.com/public/app/webroot/ and this reduces some errors, but I feel this is wrong and sort of a hack, since I didn't use those settings on my production server, which works. The last note I want to make is that the website uses dynamic URLs; I don't know if that has anything to do with it. For example, on the production server the URLs are: site.com/#hello/12321.
    HERE'S WHAT I AM WORKING WITH: I have a LAMP server set up on my laptop, which runs Ubuntu 11.10. I have enabled mod_rewrite:
      sudo a2enmod rewrite
    Then I edited my Virtual Hosts file:
      <VirtualHost *:80>
        ServerName localhost
        DirectoryIndex index.php
        DocumentRoot /home/user/public_html/site.com/public
        <Directory /home/user/public_html/site.com/public/>
          Options Indexes FollowSymLinks MultiViews
          AllowOverride All
          Order allow,deny
          allow from all
        </Directory>
      </VirtualHost>
    Then I restarted Apache. The website uses CakePHP. This is the directory structure of the website; "/home/user/public_html/site.com/public" contains:
      index.php app cake plugins vendors
    These are my .htaccess files:
    /home/user/public_html/site.com/public/app/.htaccess:
      <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^$ webroot/ [L]
        RewriteRule (.*) webroot/$1 [L]
      </IfModule>
    /home/user/public_html/site.com/public/app/webroot/.htaccess:
      <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]
      </IfModule>
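
    A stock CakePHP install carries a third .htaccess in the public root itself (here /home/user/public_html/site.com/public/.htaccess); without it, requests such as /css/popup.css are never rewritten into app/webroot, which matches the 404s above. Dotfiles are easy to lose when copying a site down from a server, so it is worth confirming it survived the copy to the laptop. A sketch of what that top-level file normally looks like, mirroring the two quoted above:

      <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteRule ^$ app/webroot/ [L]
        RewriteRule (.*) app/webroot/$1 [L]
      </IfModule>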

    Read the article

  • Windows Media Sharing not 'always' being detected by PS3

    - by Ahmad
    I'm having a weird problem with Windows Media Sharing on Windows 7. I have the following hardware on my network:
      PC 1 --- my main PC --- runs Windows 7 Ultimate x64
      PC 2 --- my backup PC --- runs Windows 7 Ultimate x32
      PS3
    PC 1 is my main PC, which has all my data/media on it. PC 2 is a backup PC that I use maybe once every two months; it has nothing installed apart from some very basic software. The problem is that my PS3 always sees the media sharing service coming from PC 2, but it never sees the media sharing service coming from PC 1 initially. Both PC 1 and PC 2 have the same media sharing configuration (allow everything, on all devices, on all networks). But when I restart both PCs, the PS3 will only detect PC 2's media sharing service, not PC 1's. However, here's the twist: when PC 1 is restarted and I view my 'Network' on PC 2, I do see PC 1's media sharing service, and I'm able to play from it on PC 2. To get my PS3 to also see PC 1's media sharing service, I have to do either of the following two things:
      1) Play something from PC 1's media sharing service on PC 2 - the PS3 will then magically also detect PC 1's media sharing service.
      2) Go into the Services console on PC 1 and restart the 'Windows Media Player Network Sharing Service' - after this, the PS3 instantly starts to see PC 1's media sharing service.
    Since my PS3 is only a month old and detects PC 2's media sharing service without trouble, I think the problem is somewhere in the configuration of PC 1's media sharing service. Also, on PC 1 I have Norton Internet Security 2012 installed, but I've disabled it completely, and have also disabled Windows Firewall (on PC 1 only). Can someone shed some light on this?
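
    Given that manually restarting the sharing service fixes it, one hedged workaround is to restart that service from an elevated command prompt, or to switch it to delayed automatic start so it only comes up after the network is fully ready at boot. WMPNetworkSvc is the short name behind 'Windows Media Player Network Sharing Service'; whether delayed start actually cures the discovery issue is an assumption worth testing:

      rem restart the sharing service (the same thing option 2 above does through the Services console)
      net stop WMPNetworkSvc
      net start WMPNetworkSvc
      rem or make it start only after the rest of the network stack is up
      sc config WMPNetworkSvc start= delayed-auto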

    Read the article

  • LogMeIn Hamachi for Linux

    - by tlunter
    So far most of my work using LogMeIn Hamachi has been from either a Mac OS X or Windows system to Windows or a Linux Computer. Recently I purchased a mini computer and have been running Ubuntu Server on it, as my little server. I knew LogMeIn had a Linux client that is command line only, but I often do all my work via command line anyway, so that wasn't an issue. I added my user to the correct local file so that I could run the hamachi daemon without sudo, and was able to connect to LogMeIn's service. I decided to set up my Linux server as a git server as well, and set it up correctly. The thing is, the server is behind my schools firewall and I need to use hamachi to get around that. Since most of the time I was using either Mac or Windows, I never had an issue sshing onto any of my computers since LogMeIn is fully featured for these OSs. From Linux (Arch) though, it seems like the client cannot correctly route to the LogMeIn IPs. I know from Windows I can connect to the Linux computers, both of them. From Linux (Arch) though, I can't connect to my Mac, Windows, or Linux server. It keeps just dropping the connection. I was wondering if there was some configuration that I would need to make for this to work. I understand that it is most likely going to be a static configuration since I assume it has to do with the computer not understanding that 5.*.*.* actually refers to another IP:Port. Has anyone had any experience getting this to work?
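
    A couple of hedged checks from the Arch side, since the symptom (connections dropping to every peer) points at the tunnel interface or its routes rather than at the individual hosts; the commands assume LogMeIn's Linux hamachi CLI and the usual ham0 tunnel device:

      # confirm the client is logged in and shows the peers as reachable
      sudo hamachi list
      # confirm there is a route covering Hamachi's 5.x.x.x range and that it points at the tunnel interface
      ip route show | grep '^5\.'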

    Read the article

  • Uninstalled programs cleaner

    - by ldigas
    Generally, I'm looking for a recommendation for an uninstalled-programs cleaner. What is that? You have a program, you uninstall it, and it leaves a bunch of crap on your machine, starting from registry entries, to shortcuts that lead nowhere, up to a bunch of no-longer-used files in Program Files and so on... Registry cleaners usually do some of this stuff, but I was wondering, what are good tools that have it all in one package? I know such tools exist, 'cause in the past I've met a few. Only I didn't need them then ;)

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer. This worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory reset FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears and then one of three things happens:
    1. The very first time I do this after a factory reset, I get a Windows error report, which contains this information:
      Application Name: JavaSetup7u5.exe
      Application Version: 7.0.50.6
      Application Timestamp: 4feacd84
      Fault Module Name: JavaIC.dll
      Fault Module Version: 9.9.9.9
      Fault Module Timestamp: 4f2343d6
      Exception Offset: 000052cb
      Exception Code: c0000417
      Exception Data: 00000000
      OS Version: 6.1.7600.2.0.0.768.3
      Locale ID: 1033
      Additional Information 1: 773c
      Additional Information 2: 773cd78cf06816f8246f359fa270f3bb
      Additional Information 3: f51a
      Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3
    2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."
    3. Or nothing happens at all.
    I believe item 2 is a red herring: running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem; when this happens the installer deletes the downloaded files, and then when you run it for the third time it downloads everything again and does the JavaIC.dll crash. I suspect the downloader is appending to the existing files or something, causing the corruption.
    I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer; I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads.
    Some theories I have:
    - The Java setup files on java.com actually changed in between the first successful install and my later attempts. This seems unlikely, as none of the version numbers have changed. However, I have seen a couple of reports of this error which showed up in the past 24 hours. This looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. (Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5.) However, 7u6 being released two days ago is too much of a coincidence to ignore. Update: the 7u6 online installer is available from the Oracle Technology Network. It crashes in exactly the same way.
    - The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail, but the workaround (setting the clock back before starting the factory reset) does not enable Java to install correctly.
    - The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.
    - The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what JavaIC.dll does.
    - Microsoft's Patch Tuesday was the 14th. Some update in that could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.
    Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually inspects your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9, but that toolbar doesn't work on Chrome, so the installer tries to display a different ad. The different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by a combination of Google Chrome as the default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session the application runs successfully, but when I schedule it to run under the same user/security context outside of the desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which is preventing this, but maybe I have overlooked something; can anyone be of any assistance?
      System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
        at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
        at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
        --- End of inner exception stack trace ---
        at System.Net.HttpWebRequest.GetResponse()
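
    One hedged line of investigation: code that works in an interactive session but times out from a scheduled task is often picking up different proxy settings, since the interactive user's WinINET/IE proxy may not apply outside the desktop session and the machine-wide WinHTTP setting (or a direct connection) is used instead. On Server 2003 the WinHTTP side can be inspected and, if needed, seeded from the IE settings:

      rem show the machine-wide WinHTTP proxy configuration
      proxycfg
      rem import the current user's IE proxy settings into WinHTTP (only if a proxy is actually required)
      proxycfg -u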

    Read the article

  • Defragment an Exchange Volume

    - by IceMage
    The Scenario: I use a dedicated volume (RAID volume) to store all of my data for my Exchange 2007 server. Today, out of curiosity, I decided to check up on how fragmented the files on this data volume were. To my surprise, the answer is extremely. So, a three part question: First and Foremost, SHOULD I defragment this volume (after a full backup of course)? Be specific as to why not if I should not, or reasons I absolutely should if I should. Second, about how much time should I allow for during this maintenance period per gigabyte. The drives are all 7200 RPM SATA drives on a Hardware RAID 5 controller (Perc 5i/6i, can't remember), the files are extremely fragmented. (Over 5000 file fragments per gigabyte). Third, is there something wrong here? It seems to me like the drive shouldn't be this fragmented. Could something be configured incorrectly that could be causing this to happen?

    Read the article

  • How do I configure PHP5 and Apache2 on Ubuntu Server?

    - by rofls
    I'm trying to follow these instructions (under the "Troubleshooting PHP 5" heading). I have PHP installed, and when I run a2enmod php5 it says "Module php5 already enabled". The problem is I created a file, test.php, containing just this:
      <?php phpinfo(); ?>
    and put it in /var/www, like the instructions tell me to, but running curl http://localhost/test.php produces an Apache-generated 404 that says it can't find that file. I have:
      ServerName localhost
      DocumentRoot /var/www
    in one of the sites-available files in the /etc/apache2 directory. I should probably figure this out on my own, but for troubleshooting the instructions say: "If the problem persists, check your PHP file authorisations (it should be readable at least by Ubuntu user "apache"), and check if the PHP code is correct. For instance, copy your PHP file, replace your whole PHP file content by "<?php phpinfo(); ?>" (without the quotation marks): if you get the PHP test page in your web browser, then the problem is in your PHP code, not in Apache or PHP configuration nor in file permissions. If this doesn't work, then it is a problem of file authorisation, Apache or PHP configuration, cache not emptied, or Apache not running or not restarted." And I don't know where the PHP file authorisations are or how to do that.
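
    A hedged first check, since a 404 for a file that exists usually means Apache is serving a different vhost or DocumentRoot than expected:

      # show which vhosts and document roots Apache has actually loaded
      apache2ctl -S
      # confirm the test file is where that DocumentRoot points, and is readable
      ls -l /var/www/test.php

    If another enabled site (the stock default vhost, for instance) wins for localhost, its DocumentRoot is the one that matters, regardless of what the edited file says.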

    Read the article

  • Configure J2EE Agent with OpenAM behind Reverse Proxy

    - by Troy
    I have a reverse proxy with two SSL-enabled NamedVirtualHosts on different ports. The container on each internal host is GlassFish 2.1.1. The proxy configuration is as follows (Proxy URL -> Internal URL):
      https://apps.mydomain.com -> http://apps.internal.com
      https://secure.otherdomain.com:8080/ -> http://secure.internal.com
    I initially tried configuring the J2EE agent in OpenAM and the web app container to use the internal URLs (I appended /openam and /agentapp respectively). However, I received the following error when trying to access a secured application such as https://apps.mydomain.com/webapp:
      java.lang.RuntimeException: Failed to load configuration: ApplicationSSOTokenProvider.getApplicationSSOToken(): Unable to get Application SSO Token
    A second attempt gives the following error:
      java.lang.NoClassDefFoundError: Could not initialize class com.sun.identity.agents.filter.AmFilterManager
    Along with these in the agent debug.out:
      ERROR: Failed to obtain auth service url from server: null://null:null
      ...
      SiteMonitor: Site URL http://secure.internal.com/openam/namingservice is not available.
    If I specify the server and agent URLs using the proxy URLs, then the agent appears to be working and I am redirected to the OpenAM login page. However, the goto in the URL is http://apps.mydomain.com/webapp instead of https://apps.mydomain.com/webapp (missing https), so after authentication the redirect fails. Now I could possibly get by with mod_rewrite, but it feels hackish and I really want to know what's going on. Any ideas?

    Read the article

  • Best Way to Archive Digital Photos and Avoid Duplicate File Names

    - by user31575
    This problem pertains to archiving digital pictures taken with multiple cameras. Answers here covered the general topic of the mechanics of backups: How do you archive digital photos and videos? I, however, face another problem. Having multiple (Canon) cameras and multiple SD cards (mixed and matched at random), I have found that different SD cards hold different photos with the same file name, i.e. two different photos each named IMG_3141.JPG. Additionally, for better or worse, I've backed up the files to multiple places and need to consolidate my backups. I want to eliminate duplicates, but not clobber files. The only way I can think of is to append a hash (md5 or sha1) to the file name, i.e. IMG_3141.JPG becomes IMG_3141_KT229QZ31415926ASDF.JPG, and then sort them out. Any better ways? (Note: this "open letter" addresses the 'duplicate file name' concern: http://photofocus.com/2010/09/13/an-open-letter-to-digital-camera-manufacturers-regarding-camera-file-naming/ )
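
    A minimal bash sketch of that rename-with-hash idea, assuming GNU coreutils' md5sum (on OS X, md5 -q would take its place) and using only the first 8 hex characters to keep names readable; mv -n refuses to overwrite, so nothing gets clobbered:

      for f in IMG_*.JPG; do
        h=$(md5sum "$f" | cut -c1-8)       # first 8 hex chars of the content hash
        mv -n "$f" "${f%.JPG}_${h}.JPG"    # IMG_3141.JPG -> IMG_3141_a1b2c3d4.JPG
      done

    After the rename, byte-identical copies across the various backups end up with identical names, so true duplicates are easy to spot and delete, while distinct photos that merely shared a camera-assigned name no longer collide.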

    Read the article

  • failover cluster file replication

    - by user156144
    I have a Windows 2008 R2 failover cluster. I am going to move one of our Windows services onto this new server. The service writes some trace information to a log file on the local hard drive. This becomes a problem once it is moved to the cluster: when node A becomes unavailable and node B takes over, there are two places where I need to look for log files. Is there a way to make sure that, regardless of which node is active, I get one complete log file? I have been researching this and there is something called DFS Replication, but I was wondering if there is something better that works with failover clustering... I would prefer not having to update my code. I can point the service at a different log location by changing the app.config file, but no code changes...

    Read the article

  • cisco 2900xl - SNMP - Get mac address of device connected to an interface

    - by ankit
    Hello all. Basically what I want to do is find out the MAC address of a device plugged into an interface on the switch (FastEthernet0/1, for example). Reading through the switch documentation I found that I can configure an SNMP trap to notify me of any new MAC address the switch detects, using the command snmp-server enable traps mac-notification, but for some reason my switch does not support this feature. The only options I see are:
      CORE_SWITCH(config)#snmp-server enable traps ?
        c2900            Enable SNMP c2900 traps
        cluster          Enable Cluster traps
        config           Enable SNMP config traps
        entity           Enable SNMP entity traps
        hsrp             Enable SNMP HSRP traps
        snmp             Enable SNMP traps
        vlan-membership  Enable VLAN Membership traps
        vtp              Enable SNMP VTP traps
        <cr>
    So the other way would be for me to run a cron job on my gateway to poll the switch periodically over SNMP to get new MAC addresses. I have looked everywhere but can't seem to find the OID that would provide this information. Any help would be very much appreciated! Here's the output from "show version" on my switch:
      Cisco Internetwork Operating System Software
      IOS (tm) C2900XL Software (C2900XL-C3H2S-M), Version 12.0(5.4)WC(1), MAINTENANCE INTERIM SOFTWARE
      Copyright (c) 1986-2001 by cisco Systems, Inc.
      Compiled Tue 10-Jul-01 11:52 by devgoyal
      Image text-base: 0x00003000, data-base: 0x00333CD8
      ROM: Bootstrap program is C2900XL boot loader
      CORE_SWITCH uptime is 1 hour, 24 minutes
      System returned to ROM by power-on
      System image file is "flash:c2900XL-c3h2s-mz.120-5.4.WC.1.bin"
      cisco WS-C2912-XL (PowerPC403GA) processor (revision 0x11) with 8192K/1024K bytes of memory.
      Processor board ID FAB0409X1WS, with hardware revision 0x01
      Last reset from power-on
      Processor is running Enterprise Edition Software
      Cluster command switch capable
      Cluster member switch capable
      12 FastEthernet/IEEE 802.3 interface(s)
      32K bytes of flash-simulated non-volatile configuration memory.
      Base ethernet MAC Address: 00:01:42:D0:67:00
      Motherboard assembly number: 73-3397-08
      Power supply part number: 34-0834-01
      Motherboard serial number: FAB040843G4
      Power supply serial number: DAB05030HR8
      Model revision number: A0
      Motherboard revision number: C0
      Model number: WS-C2912-XL-EN
      System serial number: FAB0409X1WS
      Configuration register is 0xF
    Thanks, -ankit
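
    The MAC address table on this platform is exposed through the standard BRIDGE-MIB, so the cron job can simply walk it; note that on Cisco switches the bridge MIB is instanced per VLAN, addressed with the community@vlan convention. A hedged sketch with net-snmp (the switch IP is a placeholder):

      # dot1dTpFdbPort (1.3.6.1.2.1.17.4.3.1.2): learned MAC address -> bridge port number, here for VLAN 1
      snmpwalk -v 1 -c public@1 <switch-ip> 1.3.6.1.2.1.17.4.3.1.2
      # dot1dBasePortIfIndex (1.3.6.1.2.1.17.1.4.1.2): bridge port number -> ifIndex, to map back to FastEthernet0/x
      snmpwalk -v 1 -c public@1 <switch-ip> 1.3.6.1.2.1.17.1.4.1.2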

    Read the article

  • getting PHP PDO flavors to work on Mac OS X

    - by Jason S
    I'm running OS X 10.5; it looks like it came with Apache and PHP installed (minus some minor configurations which I turned on per this page; I've used Apache before so I know the basics of how httpd.conf works). I've got a pre-existing script which uses PDO. I've got a MySQL database and can easily configure my script to access the database via PDO MySQL or PDO ODBC. The problem is, that even though I enabled the PDO MySQL and PDO ODBC extensions in php.ini, phpinfo() reports the only PDO drivers are sqlite2 and sqlite. I'm guessing the relevant extension .dll or .so files are not present? How do I get them? note: I'm using the built-in install for PHP. (see apple's page on enabling php, which doesn't say anything about configure or adding additional .so files)

    Read the article

  • Linux Mint reset display resolution from console

    - by wullxz
    I have Linux Mint 13 Xfce in a VMware Workstation 8 VM. I changed the resolution from 800x600 to 1280x768 and now I get logged out immediately whenever I try to log in. I knew how to get back to my old resolution back in the xorg.conf days, but Linux Mint now uses xrandr, which won't list any displays when run from the console because X is not running (of course not - I can't log in through the GUI). I know there are configuration files in /etc/X11/Xsession.d/ because I once configured a Debian-based thin client's resolution in a file called /etc/X11/Xsession.d/91configure_display, but that file doesn't exist in my Linux Mint VM. So, how do I reset my X screen resolution from the console?
    Edit: I forgot to mention that I can't change the resolution from the console either:
      # xrandr -s 800x600
      Can't open display
    This message appears every time I use xrandr or xrandr -s *resolution*.
    Update: I tried what bWowk suggested:
      # export DISPLAY=:0.0
      # xrandr -s 800x600
      No protocol specified
      No protocol specified
      Can't open display :0.0
    So that doesn't work either. Isn't there a configuration file that is executed every time X starts? X is running, by the way - ps aux | grep X shows one /usr/bin/X process.
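
    A hedged suggestion, assuming the bad mode was saved by the Xfce settings daemon rather than in an xorg.conf: Xfce keeps per-user display settings in an xfconf channel, so moving that file aside from the console and then rebooting (or restarting the display manager) should let the session come back up at the default resolution:

      # path assumed from the usual xfconf layout; back the file up rather than deleting it
      mv ~/.config/xfce4/xfconf/xfce-perchannel-xml/displays.xml ~/displays.xml.bak
      sudo reboot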

    Read the article

  • How do you use environment variables, such as %CommonProgramFiles%, in the PATH and have them recognized by services.exe?

    - by Brad Knowles
    I'm trying to add C:\Program Files\Common Files\xxx\xxx to the system PATH environment variable by appending %CommonProgramFiles%\xxx\xxx to the existing path. After rebooting, I open a command prompt and check the PATH. It expands correctly. However, when using Process Explorer from Sysinternals to view the Environment variables on services.exe, it shows the unexpanded version. Coincidentally, the paths using %SystemRoot% expand and are recognized just fine. I've tried altering the PATH through the Environment Variables window from System Properties and through direct Registry manipulation, neither seems to work. Is it possible to use other environment variables, besides %SystemRoot% in PATH and have services.exe understand it?
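
    For what it's worth, services.exe builds its environment once at boot from HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment, and a REG_EXPAND_SZ value such as Path generally only expands variables that are already defined (as plain REG_SZ) in that same key at that point; %SystemRoot% is special-cased by the system, which is why it works. A hedged way to inspect what that key actually holds:

      rem check how Path is stored and whether CommonProgramFiles is even defined at that level
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path
      reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v CommonProgramFiles

    If the second query comes back empty, the variable simply doesn't exist when services.exe expands Path, and the practical workaround is to put the literal C:\Program Files\Common Files\xxx\xxx path into PATH instead.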

    Read the article

  • More advanced 'Apple Automator' software?

    - by OrangeBox
    Is there any software similar to Automator but more advanced? In our situation we have two files with the same name; one is a MOV, the other XML. We want to use some of the metadata within the XML to rename both files. Then we want to rearrange the contents of the XML file so that it is compatible with another piece of software we use (I think this is called mapping). Essentially, some software that takes a bunch of variables from an existing file and performs file actions based on them. I imagine this would be an easy task using AppleScript, but I'm wondering if there is an OS X application similar to Automator that can do the above. Questions are: Is there software that can do the above? Could Automator achieve this? What is the name of this process? If no such software exists, what would be the best kind of script to use? e.g. make an AppleScript, Python script, etc.

    Read the article

  • rsync to multiple destinations using same filelist?

    - by Dylan B.
    I'm wondering if it's possible for rsync to copy one directory to multiple remote destinations all in one go, or even in parallel. (not necessary, but would be useful.) Normally, something like the following would work just fine: $ rsync -Pav /junk user@host1:/backup $ rsync -Pav /junk user@host2:/backup $ rsync -Pav /junk user@host3:/backup And if that's the only option, I'll use that. However, /junk is located on a slow drive with quite a few files, and rebuilding the filelist of some ~12,000 files each time is agonizingly slow (~5 minutes) compared to the actual transfer/updating. Is it possible to do something like this, to accomplish the same thing: $ rsync -Pav /junk user@host1:/backup user@host2:/backup user@host3:/backup Thanks for looking!
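
    rsync can't fan out to several destinations in a single run, but its batch mode gets at the real pain point here: the source is scanned once, the resulting update stream is saved, and the other hosts just replay it. A hedged sketch following the pattern in the rsync man page (paths and hostnames reused from the question; this assumes the three /backup destinations start out in roughly the same state):

      # scan /junk once: update host1 and record the changes into a batch file
      rsync -Pav --write-batch=/tmp/junk.batch /junk user@host1:/backup
      # ship the batch to the other hosts and replay it there; /junk is never rescanned
      scp /tmp/junk.batch user@host2:/tmp/
      ssh user@host2 "rsync -Pav --read-batch=/tmp/junk.batch /backup"
      scp /tmp/junk.batch user@host3:/tmp/
      ssh user@host3 "rsync -Pav --read-batch=/tmp/junk.batch /backup"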

    Read the article

  • Xcopy /exclude does not exclude some of the specified criteria

    - by Richard Z.
    Good afternoon. I want xcopy to copy all files meeting certain criteria on the C drive to a specific folder, except ones located in the directories specified in excl.txt. The exclusions only work partially - the files located in %systemroot%, %programfiles% and in each profile's appdata are still copied, even though those directories are listed in excl.txt. How do I make xcopy skip those directories, preferably still using environment variables to specify the paths? My current syntax is:
      xcopy /s /c /d /h /i /r /y /g /f /EXCLUDE:excl.txt %systemdrive%\*.doc f:\test\
    excl.txt currently contains the following:
      \%temp%\
      \%userprofile%\appdata
      \%programfiles%\
      \%programfiles(x86)%\
      \%systemroot%\
      \%programdata%\
      appdata
      windows
      %programfiles%
    Thank you very much.
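
    One relevant detail: xcopy reads the /EXCLUDE file literally - each line is matched as a plain substring against the absolute path of every file, and environment variables inside the file are not expanded (cmd only expands them on the command line itself), so the %systemroot% and %programfiles% entries can never match. A hedged rewrite of excl.txt using literal path fragments instead (it is also worth saving the file as plain ANSI text, since a Unicode-encoded exclude file tends to fail silently):

      \Windows\
      \Program Files\
      \Program Files (x86)\
      \ProgramData\
      \AppData\
      \Temp\

    Because matching is substring-based, a fragment such as \AppData\ covers that folder in every user profile without needing %userprofile%.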

    Read the article

  • HP Procurve 2610 intervlan routing

    - by user19039
    Can anyone tell me why inter-VLAN routing is working for all VLANs except my newly created VLAN 4? I have an HP ProCurve 2610. Any help would be appreciated. I have basically this one switch as the core, with unmanaged switches attached to it; we have a second 2610 on port 28.
    Running configuration:
      ; J9085A Configuration Editor; Created on release #R.11.25
      hostname "Core_HP"
      interface 22
         speed-duplex 100-full
         exit
      ip routing
      snmp-server community "public" Unrestricted
      vlan 1
         name "DEFAULT_VLAN"
         untagged 1-12,17-22,26-27
         ip address 192.168.4.6 255.255.255.0
         tagged 25
         no untagged 13-16,23-24,28
         exit
      vlan 2
         name "WAN"
         untagged 28
         ip address 10.254.254.3 255.255.255.0
         exit
      vlan 3
         name "Wireless"
         untagged 13-16,24
         ip address 192.168.7.6 255.255.255.0
         ip helper-address 192.168.4.2
         tagged 27
         exit
      vlan 35
         name "guest"
         untagged 23
         tagged 24
         exit
      vlan 4
         name "esxi"
         untagged 25
         ip address 10.10.1.1 255.255.248.0
         exit
      ip route 192.168.5.0 255.255.255.0 10.254.254.1
      ip route 192.168.6.0 255.255.255.0 10.254.254.1
      ip route 0.0.0.0 0.0.0.0 192.168.4.10
    show ip route:
      IP Route Entries
      Destination        Gateway         VLAN Type      Sub-Type  Metric Dist.
      ------------------ --------------- ---- --------- --------- ------ -----
      0.0.0.0/0          192.168.4.10    1    static              1      1
      10.10.0.0/21       esxi            4    connected           0      0
      10.254.254.0/24    WAN             2    connected           0      0
      127.0.0.0/8        reject               static              0      250
      127.0.0.1/32       lo0                  connected           0      0
      192.168.4.0/24     DEFAULT_VLAN    1    connected           0      0
      192.168.5.0/24     10.254.254.1    2    static              1      1
      192.168.6.0/24     10.254.254.1    2    static              1      1
      192.168.7.0/24     Wireless        3    connected           0      0
    show ip:
      Internet (IP) Service
      IP Routing : Enabled
      Default TTL : 64
      Arp Age : 20
      VLAN         | IP Config  IP Address      Subnet Mask     Proxy ARP
      ------------ + ---------- --------------- --------------- ---------
      DEFAULT_VLAN | Manual     192.168.4.6     255.255.255.0   No
      WAN          | Manual     10.254.254.3    255.255.255.0   No
      Wireless     | Manual     192.168.7.6     255.255.255.0   No
      esxi         | Manual     10.10.1.1       255.255.248.0   No
      guest        | Disabled

    Read the article

  • 2 NICs, 2 default gateways

    - by andre.dias
    Here is my scenario: I have a server with 2 NICs, each one with a different IP, connected to a different router. Almost everything is configured the way I need: traffic coming in on eth0 exits via eth0, traffic coming in on eth1 exits via eth1. And there is a default gateway configured:
      $ route
      default IP1 0.0.0.0 UG 0 0 0 eth0
    With this configuration, traffic generated on the server itself goes out via eth0 (lynx www.google.com, for example). The problem: the Internet link on eth0 went down today. Traffic coming in on eth1 was fine - no problem. But traffic generated on the server was a problem: the default gateway was out, so there was no more access to the Internet (no more lynx www.google.com). So I added a second default gateway entry, pointing to eth1. For 30 minutes I kept it that way - two default gateways, only one of them "working" - and everything worked just fine. But then I removed the eth0 gateway entry because, well, two default gateways is kind of weird. My question: is there any problem with keeping these two default gateways, one for each NIC, so that I don't need to do anything when one link goes down again?
      $ route
      default IP1 0.0.0.0 UG 0 0 0 eth0
      default IP2 0.0.0.0 UG 0 0 0 eth1
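
    Keeping both entries is workable as long as they carry different metrics - the kernel uses the lower-metric route and falls back to the other only if the first is removed or its interface loses link. A hedged sketch with iproute2, reusing IP1/IP2 from the question:

      # primary default route via eth0, backup via eth1 with a worse metric
      ip route replace default via IP1 dev eth0 metric 100
      ip route add default via IP2 dev eth1 metric 200

    The caveat: if the ISP dies upstream while the eth0 port itself keeps link, nothing pulls the primary route automatically, so a small monitoring script (or a routing daemon) is still needed for true automatic failover.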

    Read the article

  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use rsync to take backups of files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that runs the rsync command. Here is the command I use:
      rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1
    live_server is ssh-able without a password, so this works. Now the problem is with the --delete-after option. After all files are synced, at the end, I can see the deletion step being skipped. The log file shows:
      IO error encountered -- skipping file deletion
    When I looked through the log there were also some errors during the file sync:
      rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)
    So my understanding is that because rsync could not read all the files, it skips the file deletion for safety reasons. Is there any way to make --delete-after work even if there is some permission error? I do not want to use forced deletion, as that would be dangerous in some situations.
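
    rsync has a switch for exactly this situation: --ignore-errors tells it to carry out --delete even when I/O errors (such as the Permission denied above) occurred during the run. A hedged version of the same command with only that flag added:

      rsync -rltvh --partial --stats --ignore-errors --exclude=.beagle/ --exclude=.* --delete-after \
          root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    It is worth being aware that this deletes based on an incomplete view of the source, which is the very risk rsync guards against by default; the cleaner fix is still to sort out whatever makes that PDF unreadable, since any file rsync cannot read will also stay permanently stale in the backup.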

    Read the article

  • Changed folder contents not updating in Finder (OS X 10.8)

    - by speedofmac
    I've been having this problem for a number of days now. When I update the contents of a folder using terminal commands, by decompressing archives, or by using "Save As", the affected folder often fails to reflect these changes. Sometimes it takes quite a while for any files to be shown, even if the total size of the folder is fairly small (< 10 MB). I'm running an '09 MacBook Pro 13", so it isn't the newest system, but it certainly has enough oomph to display a list of files in a folder. Does anyone know what might be causing this?

    Read the article
