Search Results

Search found 8411 results on 337 pages for 'django shell'.


  • Error installing pkgconfig via macports

    - by Greg K
    I installed MacPorts 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs as I want to mount some directories in my OS X file system (like you can do with mount --bind in Linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to just install pkgconfig. Here's the debug output from sudo port install pkgconfig:

        $ sudo port -d install pkgconfig
        DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig
        DEBUG: OS Platform: darwin
        DEBUG: OS Version: 10.3.0
        DEBUG: Mac OS X Version: 10.6
        DEBUG: System Arch: i386
        DEBUG: setting option os.universal_supported to yes
        DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided
        DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided
        DEBUG: adding the default universal variant
        DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf
        DEBUG: Requested variant darwin is not provided by port pkgconfig.
        DEBUG: Requested variant i386 is not provided by port pkgconfig.
        DEBUG: Requested variant macosx is not provided by port pkgconfig.
        ---> Computing dependencies for pkgconfig
        DEBUG: Executing org.macports.main (pkgconfig)
        DEBUG: Skipping completed org.macports.fetch (pkgconfig)
        DEBUG: Skipping completed org.macports.checksum (pkgconfig)
        DEBUG: Skipping completed org.macports.extract (pkgconfig)
        DEBUG: Skipping completed org.macports.patch (pkgconfig)
        ---> Configuring pkgconfig
        DEBUG: Using compiler 'Mac OS X gcc 4.2'
        DEBUG: Executing org.macports.configure (pkgconfig)
        DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2'
        DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig'
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking build system type... i386-apple-darwin10.3.0
        checking host system type... i386-apple-darwin10.3.0
        checking for style of include used by make... none
        checking for gcc... /usr/bin/gcc-4.2
        checking for C compiler default output file name...
        configure: error: C compiler cannot create executables
        See `config.log' for more details.
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77
        DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname"
        Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here: "configure: error: C compiler cannot create executables"?
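
    The configure log shows MacPorts driving /usr/bin/gcc-4.2 with -arch x86_64, so a quick way to tell whether this is the Xcode 3.2.2 installation rather than the pkgconfig port is to compile a trivial program by hand with the same compiler. A rough check (the test file name is made up):

        echo 'int main(void) { return 0; }' > /tmp/conftest.c
        /usr/bin/gcc-4.2 -arch x86_64 /tmp/conftest.c -o /tmp/conftest && /tmp/conftest && echo "compiler OK"

    If that fails in the same way, the compiler/SDK install is the likely culprit and re-running the Xcode installer (making sure its UNIX development components are selected) would be the place to start; if it succeeds, the problem is more likely specific to the port's environment, and the config.log mentioned in the output will say exactly which step failed.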


  • Why are my uWSGI processes dying immediately?

    - by orokusaki
    I'm using Supervisor and the uWSGI Emperor mode. When I set limit-as to 512 (MB), workers die instantly (respawn, die, respawn, die, every 3/4 of a second or so):

        [uwsgi]
        workers = 4
        threads = 40
        limit-as = 512
        harakiri = 20
        max-requests = 1600
        ... non-performance/memory/processor-related settings omitted

    But, if I change limit-as to:

        [uwsgi]
        workers = 4
        threads = 40
        limit-as = 1024
        harakiri = 20
        max-requests = 1600
        ... non-performance/memory/processor-related settings omitted

    and restart uwsgi, the problem is gone immediately. To be sure of this, I've modified the setting back to 512, restarted again, and the problem is back immediately. Notes: my app is a simple Django app without much additional Python setup during start-up time.
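
    Since limit-as caps each worker's virtual address space, one hedged check is to look at how much address space a worker actually maps for this app once it has loaded (the process name is assumed to be uwsgi):

        # per-worker virtual (VSZ) and resident (RSS) memory, in KB
        ps -C uwsgi -o pid,vsz,rss,args

    With 40 threads per worker, thread stacks alone can reserve a large amount of address space, so a worker whose VSZ already sits near 512 MB at startup would be killed as soon as it allocates anything more, which would match the instant respawn loop.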


  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

    - The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH.
    - Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
    - The system will not necessarily have a constant number of machines. We may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage.
    - The implementation should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious, for the sake of security.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to an old article by Symantec. The article suggests moving to NIS+, but, as it is old, the Wikipedia article on NIS+ has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, and I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible.

    Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token).

    Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As noted above, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.


  • Ubuntu whois package and request limits

    - by Sam Hammamy
    I'm writing a Django app with a form that accepts an IP and does a whois lookup on the discovered domain names. I've found the Ubuntu package whois, which I plan to call from a Python subprocess, reading the stdout into a StringIO and then parsing for things like Registrar, Name Servers, etc. My question is: it seems that there are many paid whois services, which means that there must be a reason why people don't just use this Ubuntu package. I'm wondering if there's a request limit on the number of requests from a single IP to the package's whois server? I will probably be making 250 domain lookups per IP, or maybe more. Also, I've found that some domains aren't searchable:

    - qmul.ac.uk is searchable
    - kat.ph is not searchable
    - ahram.org.eg is not searchable

    Any particular reason for that?
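
    Worth noting: the whois package is only a client; each lookup goes out to the relevant registry's whois server, so any rate limits (and the per-TLD differences in whether a domain can be queried at all) come from those servers rather than from the package itself. The command-line equivalent of the planned subprocess-and-parse step looks roughly like this (the grep pattern is only a guess, since field names vary between registries):

        whois qmul.ac.uk | grep -E -i 'registrar|name server'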


  • How to install python modules for specific python version

    - by Zayatzz
    I needed to install UCS2 Python next to UCS4 Python, so I went to comp.lang.python and asked them about it. Probably not the best place to ask, but they answered (https://groups.google.com/forum/?fromgroups#!topic/comp.lang.python/bGuAfqa76W8) and now I have a brand new Python 2.7.3 UCS2 installed in /opt/bin/python. What I need now is: how can I install, for that Python version, all the other Python modules that I already have installed? Basically stuff like PIL and postgresql and mod_wsgi - basically everything needed to run Django for that Python version. Is this the right place to ask for it?
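
    One approach that keeps each module tied to a particular interpreter is to run the install step with that interpreter itself, so everything lands in its own site-packages. A sketch, with the unpacked source directory name made up:

        # build and install a source package against the UCS2 interpreter
        cd PIL-1.1.7/                      # hypothetical unpacked source tree
        /opt/bin/python setup.py install

    The same idea covers psycopg2 for PostgreSQL access; mod_wsgi is the odd one out because it is an Apache module that has to be compiled against a specific Python, and its configure script accepts a --with-python=/opt/bin/python option for exactly that.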


  • cron.daily not running at the time it should?

    - by Mariano Martinez Peck
    My /etc/cron.daily scripts seem to be executing far later than what I understand they should. I am on Ubuntu and anacron is installed. If I do a sudo cat /var/log/syslog | grep cron I get something like:

        Aug 23 01:17:01 mymachine CRON[25171]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 02:17:01 mymachine CRON[25588]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:17:01 mymachine CRON[26026]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 03:25:01 mymachine CRON[30320]: (root) CMD (test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ))
        Aug 23 04:17:01 mymachine CRON[26363]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 05:17:01 mymachine CRON[26770]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 06:17:01 mymachine CRON[27168]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:17:01 mymachine CRON[27547]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Aug 23 07:30:01 mymachine CRON[2249]: (root) CMD (start -q anacron || :)
        Aug 23 07:30:02 mymachine anacron[2252]: Anacron 2.3 started on 2014-08-23
        Aug 23 07:30:02 mymachine anacron[2252]: Will run job `cron.daily' in 5 min.
        Aug 23 07:30:02 mymachine anacron[2252]: Jobs will be executed sequentially
        Aug 23 07:35:02 mymachine anacron[2252]: Job `cron.daily' started

    As you can see, at 3:25 it tried to do something, but the cron.daily execution really started at 7:35. My /etc/crontab is:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        # m h dom mon dow user  command
        17 *  * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 3  * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6  * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6  1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        #

    From what I understand, daily scripts are indeed set for 3:25. My /etc/anacrontab is:

        # /etc/anacrontab: configuration file for anacron
        # See anacron(8) and anacrontab(5) for details.
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HOME=/root
        LOGNAME=root
        # These replace cron's entries
        1        5   cron.daily    run-parts --report /etc/cron.daily
        7        10  cron.weekly   run-parts --report /etc/cron.weekly
        @monthly 15  cron.monthly  run-parts --report /etc/cron.monthly

    So... does someone know why my cron started to do something at 3:25 but then really started the jobs at 7:35? Also, as you can see in the log, hourly jobs are being executed at the correct time: the hour plus 17 minutes, which is exactly what I have in /etc/crontab. Finally, from the logs, it seems my daily jobs are actually being run by anacron rather than cron? So cron finds nothing to run (at 3:25) and then anacron runs the jobs at 7:35? If true, how can I fix this? Thanks in advance.
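
    The log largely tells the story: the 03:25 crontab entry only runs cron.daily directly when anacron is NOT installed (test -x /usr/sbin/anacron || ...), so on this box it is a no-op, and the 07:30 "start -q anacron" line comes from a separate cron job shipped by Ubuntu's anacron package (normally /etc/cron.d/anacron); anacron then honours the 5-minute delay on the cron.daily line of /etc/anacrontab, hence 07:35. A couple of hedged things to look at, assuming a stock Ubuntu layout:

        cat /etc/cron.d/anacron             # the job that fires anacron at 07:30
        cat /var/spool/anacron/cron.daily   # anacron's timestamp of the last cron.daily run
        # if the machine is reliably up at 03:25 and cron.daily should run then,
        # removing anacron lets the "25 3" crontab entry take effect:
        # sudo apt-get remove anacron

    Alternatively, editing the trigger time in /etc/cron.d/anacron and the delay in /etc/anacrontab moves the anacron-driven run to whatever hour suits, while keeping anacron's catch-up behaviour for machines that were off overnight.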


  • How can I make Amazon SES the default method of sending mail from my server?

    - by Jake
    I'd like to start using Amazon SES for all emails from our server. We have a few freelance designers with PHP hosting, some Django/Python web apps, and also some system utilities which send emails. So I'd like to have PHP's mail function, the command-line mail command, and our Python apps all be able to use it, preferably without having to set them all up in their own way. I think what I need is to have something like Postfix running on localhost and using SES for its delivery, but I don't know how to do that. Amazon's docs state I need to set up my mail transfer agent (MTA) so that it invokes the ses-send-email.pl script. I have the script but I'm not sure how to achieve this. Am I on the right track? If so, how can I configure Postfix to use that script?
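
    The local-Postfix idea does cover all three cases at once: PHP's mail() and the command-line mail utility hand messages to the local MTA, and the Django apps can simply point their SMTP backend at localhost. Besides the ses-send-email.pl hook, SES also exposes a plain SMTP interface, and relaying Postfix through that is just a few settings - a sketch, with the region/endpoint and the credentials file treated as assumptions:

        sudo postconf -e 'relayhost = [email-smtp.us-east-1.amazonaws.com]:587'
        sudo postconf -e 'smtp_sasl_auth_enable = yes'
        sudo postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
        sudo postconf -e 'smtp_sasl_security_options = noanonymous'
        sudo postconf -e 'smtp_tls_security_level = encrypt'
        # /etc/postfix/sasl_passwd contains one line:
        # [email-smtp.us-east-1.amazonaws.com]:587 SMTP_USERNAME:SMTP_PASSWORD
        sudo postmap /etc/postfix/sasl_passwd
        sudo service postfix restart

    Keep in mind that SES only accepts mail from sender addresses or domains that have been verified with it.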


  • osx 10.6.3 how does apache config work?

    - by w-
    Hi, I just got a MacBook Pro 15", so I'm unfamiliar with how the filesystem is laid out. I noticed that I've got a few paths specifying httpd.conf:

        /etc/apache2/httpd.conf
        /opt/local/apache2/conf/httpd.conf
        /private/etc/apache2/httpd.conf

    The confs differ in lots of ways - e.g. user, group, server_root, modules that are loaded, etc. The apache2 folders themselves also differ greatly. It seems that the one getting used is either /etc/apache2/httpd.conf or /private/etc/apache2/httpd.conf. I'm wondering if I might have messed up my system after installing some packages (php5, django, etc.) via MacPorts and maybe ended up with two apache2 instances. My questions are hence:

    - Which httpd.conf is the one being used?
    - What are the other files for?

    Thanks
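
    Two of those paths are actually the same file: on OS X, /etc is a symlink to /private/etc, so /etc/apache2/httpd.conf and /private/etc/apache2/httpd.conf are one config (Apple's built-in Apache), while /opt/local/apache2 belongs to a second Apache installed by MacPorts. A quick way to see which binary is running and which config each one reads:

        # Apple's bundled Apache
        /usr/sbin/httpd -V | grep -e HTTPD_ROOT -e SERVER_CONFIG_FILE
        # the MacPorts Apache, if installed
        /opt/local/apache2/bin/httpd -V | grep -e HTTPD_ROOT -e SERVER_CONFIG_FILE
        # which of the two is actually running right now
        ps aux | grep '[h]ttpd'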


  • NTFS Permissions - Access Denied even though Explicit Allow and no Deny

    - by chris613
    I'm hoping someone can help me with this NTFS permissions problem. The short version is that I can't write a new file in F:\SomeDir even though I seem to be granted full permissions via both the "Domain Admins" group and a second unprivileged group. The "Effective Permissions" tab in the explorer permissions UI shows that I have full control, and there are no "Deny"s anywhere in the ACL or anything else that looks unusual. I am logged into the machine over RDP and accessing the disk directly, not through a share.

        F:\SomeDir>set U
        USERDNSDOMAIN=THEOFFICE.LOCAL
        USERDOMAIN=THEOFFICE
        USERNAME=thisisme
        USERPROFILE=C:\Users\thisisme

        F:\SomeDir>icacls .
        . BUILTIN\Administrators:(I)(F)
          CREATOR OWNER:(I)(OI)(CI)(IO)(F)
          THEOFFICE\Domain Admins:(I)(OI)(CI)(F)
          NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
          BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
          BUILTIN\Users:(I)(OI)(CI)(RX)
        Successfully processed 1 files; Failed processing 0 files

        F:\SomeDir>net group /domain "Domain Admins"
        The request will be processed at a domain controller for domain THEOFFICE.local.
        Group name     Domain Admins
        Comment        Designated administrators of the domain
        Members
        -------------------------------------------------------------------------------
        Administrator            thatguy                  thisisme
        The command completed successfully.

        F:\SomeDir>echo "whyUNoCreateFile?" > whyUNoCreateFile.txt
        Access is denied.

    I searched for answers and came across similar problems that lead to UAC (ex. Why does removing the EVERYONE group prevent domain admins from accessing a drive?). I can't turn off UAC at the moment, so I try a "regular" group that I'm also part of. This group has no special rights assignments and is not part of any administrative groups. Still no dice:

        [***** This one command executed in an elevated shell *****]
        F:\SomeDir>icacls . /grant THEOFFICE\iteveryone:(OI)(CI)F
        processed file: .
        Successfully processed 1 files; Failed processing 0 files

        F:\SomeDir>net group /domain "iteveryone"
        The request will be processed at a domain controller for domain THEOFFICE.local.
        Group name     ITeveryone
        Comment
        Members
        -------------------------------------------------------------------------------
        Administrator            thatguy                  thisisme
        otherguy                 someitguy
        The command completed successfully.

        F:\ScanningVMsForIBM>echo y > u
        Access is denied.

    As you can see, using a "regular" group didn't help. I have logged out and back in to the server to ensure my login token is up to date, and at any rate I belonged to these groups before the server was created. If I grant explicit permission to myself, it does allow me to write files:

        [***** This one command executed in an elevated shell *****]
        F:\SomeDir>icacls . /grant THEOFFICE\thisisme:(OI)(CI)F
        processed file: .
        Successfully processed 1 files; Failed processing 0 files

        F:\SomeDir>echo y > u
        F:\SomeDir>type u
        y

    My requirement is for the "Domain Admins" group to have Full Control, or if that's not possible without disabling UAC, then a second group will do, but I can't get either to work. I'm really stumped. Can someone please point out what I could be overlooking?


  • Limit which processes a user can restart with supervisor?

    - by dvcolgan
    I have used supervisor to manage a Gunicorn process running a Django site, though this question could pertain to anything being managed by supervisor. Previously I was the only person managing and using our server, and supervisor just ran as root and I would use sudo to run supervisorctl restart myapp when needed. Now our server has to support multiple users working on different sites, and each project needs to be able to restart their own gunicorn processes without being able to restart other users' processes. I followed this blog post: http://drumcoder.co.uk/blog/2010/nov/24/running-supervisorctl-non-root/ and was able to allow non-root users to use supervisorctl, but now anyone can restart anyone else's processes. From the looks of it, supervisor doesn't have a way of doing per-user access control. Anyone have any ideas on how to allow users to restart only their own processes without root?
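
    Since supervisor's control interface is all-or-nothing - anyone who can reach the socket can restart anything that instance manages - one hedged workaround is to run a separate supervisord instance per user (or per project), each with its own config file and socket owned by that user, so their supervisorctl only ever sees their own programs. Paths here are made up for the sketch:

        # started from the user's own account (e.g. at boot via cron @reboot or an init script)
        supervisord -c /home/alice/supervisor/supervisord.conf
        # the matching control command talks only to that instance's socket
        supervisorctl -c /home/alice/supervisor/supervisord.conf restart myapp

    A root-owned supervisord can keep running any shared services, while each user-level instance manages just that user's gunicorn processes.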


  • How do I get started with Chef?

    - by Brad Wright
    The Chef documentation is pretty bad, and Google isn't helping me. Can anyone point me at a decent article or something that would help me get started? My specific issues are:

    - How do I get a client to read my configuration? chef-solo seems like the best start (I don't want to run an OpenID server or Merb).
    - How do I configure Apache to serve Django? I already know how to do this via regular server configuration, but I figure an example Chef recipe would be a good start.
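
    For the chef-solo route, everything is read from local files, so a minimal run only needs a small solo config pointing at a cookbook directory plus a JSON file naming the run list - a sketch, with the file names and cookbook name invented:

        # solo.rb contains:   cookbook_path "/var/chef/cookbooks"
        # node.json contains: { "run_list": ["recipe[mysite]"] }
        chef-solo -c solo.rb -j node.json

    Inside that cookbook, the Apache virtual host you would otherwise write by hand typically becomes a template resource plus a package/service resource or two, which makes an existing hand-written config a gentle first recipe to port.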


  • I'd like to rebuild my web server without web management software; what knowledge, skills, and tools will I require? [closed]

    - by Joe Zeng
    I've been using Webmin for a while on the web server that runs my personal website and a host of other websites, and I feel like I should be able to manage the server more directly: I haven't even touched Webmin for the past year or so, and it has a lot of functionality I'd have to click through the next time I want to access it or create a new subdomain or database on my site. I want to try something lighter and more wholly manageable, now that I'm more comfortable with using SSH and command-line tools. I've decided that I'm going to try using Django as a framework, but obviously that's only part of the picture. What sort of knowledge will I require?


  • mysql replication 1x master, 1x slave

    - by clarkk
    I have just set up one master and one slave server, but it's not working. On my website I connect to the slave server and I insert some rows, but they do not appear on the master, and vice versa. What is wrong? This is what I did:

    Master -> /etc/mysql/my.cnf:

        [mysqld]
        log-bin = mysql-master-bin
        server-id = 1
        # bind-address = 127.0.0.1
        binlog-do-db = test_db

    Slave -> /etc/mysql/my.cnf:

        [mysqld]
        log-bin = mysql-slave-bin
        server-id = 2
        # bind-address = 127.0.0.1
        replicate-do-db = test_db

    Slave, terminal 0:

        mysql> STOP SLAVE;   -- and drop tables

    Master, terminal 1:

        mysql> CREATE USER 'repl_slave'@'slave_ip' IDENTIFIED BY 'repl_pass';
        mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_slave'@'slave_ip';
        mysql> FLUSH PRIVILEGES;
        mysql> FLUSH TABLES WITH READ LOCK;
        -- leave terminal open

    Master, terminal 2:

        shell> mysqldump -u root -pPASSWORD test_db --lock-all-tables > dump.sql
        mysql> SHOW MASTER STATUS;

    Slave, terminal 3:

        shell> mysql -u root -pPASSWORD test_db < dump.sql

    Slave, terminal 0:

        mysql> CHANGE MASTER TO
        mysql>   MASTER_HOST='master_ip',
        mysql>   MASTER_USER='repl_slave',
        mysql>   MASTER_PASSWORD='repl_pass',
        mysql>   MASTER_PORT=3306,
        mysql>   MASTER_LOG_FILE='mysql-master-bin.000003',   -- from terminal 2 > SHOW MASTER STATUS
        mysql>   MASTER_LOG_POS=4,                            -- from terminal 2 > SHOW MASTER STATUS
        mysql>   MASTER_CONNECT_RETRY=10;
        mysql> START SLAVE;
        mysql> SHOW SLAVE STATUS;

    Here is the slave status:

        Array
        (
            [Slave_IO_State] => Waiting for master to send event
            [Master_Host] => xx.xx.xx.xx
            [Master_User] => repl_slave
            [Master_Port] => 3306
            [Connect_Retry] => 10
            [Master_Log_File] => mysql-master-bin.000003
            [Read_Master_Log_Pos] => 106
            [Relay_Log_File] => mysqld-relay-bin.000002
            [Relay_Log_Pos] => 258
            [Relay_Master_Log_File] => mysql-master-bin.000003
            [Slave_IO_Running] => Yes
            [Slave_SQL_Running] => Yes
            [Replicate_Do_DB] => test_db
            [Replicate_Ignore_DB] =>
            [Replicate_Do_Table] =>
            [Replicate_Ignore_Table] =>
            [Replicate_Wild_Do_Table] =>
            [Replicate_Wild_Ignore_Table] =>
            [Last_Errno] => 0
            [Last_Error] =>
            [Skip_Counter] => 0
            [Exec_Master_Log_Pos] => 106
            [Relay_Log_Space] => 414
            [Until_Condition] => None
            [Until_Log_File] =>
            [Until_Log_Pos] => 0
            [Master_SSL_Allowed] => No
            [Master_SSL_CA_File] =>
            [Master_SSL_CA_Path] =>
            [Master_SSL_Cert] =>
            [Master_SSL_Cipher] =>
            [Master_SSL_Key] =>
            [Seconds_Behind_Master] => 0
            [Master_SSL_Verify_Server_Cert] => No
            [Last_IO_Errno] => 0
            [Last_IO_Error] =>
            [Last_SQL_Errno] => 0
            [Last_SQL_Error] =>
        )
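
    One thing worth noting from the description itself: MySQL's built-in replication is one-way, master to slave, so rows inserted on the slave (which is where the website is connecting) are never copied back to the master; writes have to go to the master, and only the slave follows. Given that the slave status above shows both threads running, a hedged way to confirm the master-to-slave flow is to make a change on the master and watch the positions and tables on the slave:

        # on the master: make a change in the replicated database and note the binlog position
        mysql -h master_ip -u root -p -e "USE test_db; CREATE TABLE repl_test (id INT); SHOW MASTER STATUS\G"
        # on the slave: Exec_Master_Log_Pos should advance and the new table should appear
        mysql -h slave_ip -u root -p -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Log_File|Exec_Master_Log_Pos|Running'
        mysql -h slave_ip -u root -p -e "SHOW TABLES IN test_db;"

    If rows need to be written on both servers, that is master-master (circular) replication, which is a different setup from the one above.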


  • Apache whitelist a single location, but require basic auth for everything else

    - by Chris Lawlor
    I'm sure this is simple, but Google is not my friend this morning. The goal is:

    - /public... is openly accessible
    - everything else (including /) requires basic auth.

    This is a WSGI app, with a single WSGI script (it's a django site, if that matters..). I have this:

        <Location /public>
            Order deny,allow
            Allow from all
        </Location>

        <Directory />
            AuthType Basic
            AuthName "My Test Server"
            AuthUserFile /path/to/.htpasswd
            Require valid-user
        </Directory>

    With this configuration, basic auth works fine, but the Location directive is totally ignored. I'm not surprised, as according to this (see How the Sections are Merged), the Directory directive is processed first. I'm sure I'm missing something, but since Directory applies to a filesystem location, and I really only have the one Directory at /, and it's a Location that I wish to allow access to, but Directory always overrides Location...

    EDIT: I'm using Apache 2.2, which doesn't support AuthType None.
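
    In Apache 2.2, the piece that lets a host-based Allow win against a Require inherited from an enclosing section is Satisfy Any, so one hedged fix is to add it inside the /public Location and leave the Directory-level auth alone:

        <Location /public>
            Order deny,allow
            Allow from all
            Satisfy Any
        </Location>

    With the default Satisfy All, a request must pass both the Allow/Deny rules and the Require valid-user check, which is why the Location block on its own appears to be ignored.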


  • Screen startup apps

    - by stillinbeta
    I know that most people don't bother with things like screen anymore, but I happen to really like it, even in this GUI day and age. I still do most of my development from a bash prompt, so it's extremely useful to me. What I'm wondering is what the easiest way is to start an instance of screen (stored in a shell script or .screenrc or somewhere else) so that it starts up with set commands already running in set windows. For example, I use a Django test server, so I'd like one window to come up running "python manage.py runserver" and another blank, waiting for commands. The man page is wholly indecipherable. These old Unix utilities can do quite nearly everything, so I'm sure this is possible, but I can't for the life of me figure out how.
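
    For what it's worth, screen's config file can open windows and start a command in each at launch: every "screen -t <title> [command]" line creates one window, and a line with no command is just an empty shell. A minimal sketch, with the project path and file name made up, started with screen -c ~/.screenrc-django:

        # ~/.screenrc-django
        chdir /home/me/myproject
        screen -t runserver python manage.py runserver
        screen -t shell

    Keeping this in a separate file (rather than ~/.screenrc itself) means plain screen sessions stay unaffected.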


  • fatal error occurred while trying to sysprep the machine windows 8.1

    - by Mick
    I am trying to do a sysprep in Windows 8.1. I have created an unattend.xml:

        <settings pass="oobeSystem">
            <component name="Microsoft-Windows-International-Core" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <InputLocale>en-US</InputLocale>
                <SystemLocale>en-US</SystemLocale>
                <UILanguage>en-US</UILanguage>
                <UILanguageFallback>en-US</UILanguageFallback>
                <UserLocale>en-US</UserLocale>
            </component>
            <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <OEMInformation>
                    <Manufacturer>XYZ</Manufacturer>
                    <SupportURL>http://www.XYZ.com</SupportURL>
                </OEMInformation>
                <OOBE>
                    <HideEULAPage>true</HideEULAPage>
                    <NetworkLocation>Work</NetworkLocation>
                    <ProtectYourPC>1</ProtectYourPC>
                </OOBE>
                <UserAccounts>
                    <AdministratorPassword>
                        <Value>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Value>
                        <PlainText>false</PlainText>
                    </AdministratorPassword>
                    <LocalAccounts>
                        <LocalAccount wcm:action="add">
                            <Password>
                                <Value>XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX</Value>
                                <PlainText>false</PlainText>
                            </Password>
                            <Description>Admin</Description>
                            <DisplayName>Admin</DisplayName>
                            <Group>Administrators</Group>
                            <Name>Admin</Name>
                        </LocalAccount>
                    </LocalAccounts>
                </UserAccounts>
                <WindowsFeatures>
                    <ShowWindowsMediaPlayer>false</ShowWindowsMediaPlayer>
                    <ShowMediaCenter>false</ShowMediaCenter>
                </WindowsFeatures>
                <RegisteredOrganization>XXXXXXXXXXXXXXXXXXXXX</RegisteredOrganization>
                <RegisteredOwner>XXXXXXXXXXXXXXXXXXXXXXX</RegisteredOwner>
                <TimeZone>Central European Standard Time</TimeZone>
                <ShowWindowsLive>false</ShowWindowsLive>
            </component>
        </settings>
        <settings pass="specialize">
            <component name="Microsoft-Windows-Shell-Setup" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <RegisteredOrganization>XXXXXXXXXXXXXXXXXXXXXXXXX</RegisteredOrganization>
                <RegisteredOwner>XXXXXXXXXXXXXXXXXXXXXXXXXX</RegisteredOwner>
                <ProductKey>XXXXXXXXXXXXXXXXXXXXXXXXXXXXX</ProductKey>
            </component>
        </settings>

    And then I run:

        sysprep.exe /oobe /generalize /shutdown

    I see this error: "A fatal error occurred while trying to sysprep the machine."


  • wsgi - narrow user permissions.

    - by Tomasz Wysocki
    I have the following Apache configuration and my application is working fine:

        <VirtualHost *:80>
            ServerName ig-test.example.com
            WSGIScriptAlias / /home/ig-test/src/repository/django.wsgi
            WSGIDaemonProcess ig-test user=ig-test
        </VirtualHost>

    But I want to protect my files from other users, so I do:

        chown ig-test /home/ig-test/ -R
        chmod og-rwx /home/ig-test/ -R

    And the application stops working:

        (13)Permission denied: /home/ig-test/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    Is it possible to achieve what I'm doing with wsgi? If I have to give read permissions to some files, that will be fine. But there are files I have to protect (like the file with the DB configuration, or the business logic of the application).


  • Launchd item no longer firing in Snow Leopard

    - by ridogi
    A launchd item that was working in 10.5 is no longer working after my upgrade to 10.6. I am running 10.6.2, and I have recreated the launchd item and given it a new name, and that one doesn't run either. I have found a link of people with the same problem on Google Groups, but none of the advice in that link helps. My launchd item is not listed in /private/var/db/launchd.db/com.apple.launchd/overrides.plist or in any of the overrides.plist files in the subdirectories of /private/var/db/launchd.db/. I have also tried to set this up as both a user agent and a user daemon. My launchd item simply runs a shell script, which I have no problem launching manually.

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.eric.tmnotify.launchd</string>
            <key>ProgramArguments</key>
            <array>
                <string>/<path_to>/tmnotify.sh</string>
            </array>
            <key>StartInterval</key>
            <integer>3600</integer>
        </dict>
        </plist>

    I have tried to load it by overriding the disabled key (even though it is not disabled in any of the overrides.plist files) with both:

        sudo launchctl load -F /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        sudo launchctl load -w /Users/eric/Library/LaunchAgents/com.eric.tmnotify.launchd.plist

    and after running either of them I can see that it is running by using sudo launchctl list, but the shell script never fires.

    Edit: I have also put this in the formerly blank file at /private/var/db/launchd.db/com.apple.launchd.peruser.501/overrides.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>com.eric.tmnotify.launchd</key>
            <dict>
                <key>Disabled</key>
                <false/>
            </dict>
        </dict>
        </plist>

    I also tried inserting this alphabetically:

        <key>com.eric.tmnotify.launchd</key>
        <dict>
            <key>Disabled</key>
            <false/>
        </dict>

    into the file /private/var/db/launchd.db/com.apple.launchd/overrides.plist, but still no dice.
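
    A few generic launchd debugging steps that might narrow this down (using the plist path from above): check that the plist parses, load it as the logged-in user rather than via sudo - running launchctl under sudo targets root's per-user context on 10.6, which is one common reason a LaunchAgent in ~/Library/LaunchAgents shows up in launchctl list but never fires in the login session - and watch the system log around the time the StartInterval should tick:

        plutil -lint ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist
        launchctl load -w ~/Library/LaunchAgents/com.eric.tmnotify.launchd.plist   # note: no sudo
        launchctl list | grep tmnotify
        tail -f /var/log/system.log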


  • Apache+FastCGI Timeout Problem

    - by Sadjad Fouladi
    Hi all. I've recently installed mod_fastcgi and Apache 2.2. I have a simple CGI script as below (test.fcgi):

        #!/bin/sh
        echo sadjad

    But when I invoke mysite.com/test.fcgi I see an "Internal Server Error" message after a short period of time. The error.log file shows this error message:

        [Tue Jan 31 22:23:57 2006] [warn] FastCGI: (dynamic) server "~/public_html/oaduluth/dispatch.fcgi" has failed to remain running for 30 seconds given 3 attempts, its restart interval has been backed off to 600 seconds

    This is my .htaccess file:

        AddHandler fastcgi-script .fcgi
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ django.fcgi/$1 [QSA,L]

    I'm very confused, please help me! [Sorry for my poor English!]


  • Cannot open simple script application on mac

    - by streetpc
    Mac OS X 10.6. I created a very simple app, which is only a wrapper for a shell script (so that I can select this script in application selectors, like startup apps). I tried to launch it and yesterday it worked, but today I changed the executable script's content and name (with something that works perfectly in a shell script launched in the Terminal) and it will only display a Finder-iconed dialog saying:

        Cannot open the application because it is not supported on this kind of Mac.

    I restored the previous script (content/name) but I still get the error! Same when re-bundling the app from scratch, or completely changing the bundle identifier…

    If I try to open it in the Terminal using open My.app, I get:

        The application cannot be opened because it has an incorrect executable format.

    But when I execute Contents/MacOS/Script directly, it always works (with both contents). Also, it is displayed with the correct icon and meta-information in the Finder (so I guess the Info.plist is understood).

    The app's file tree is:

        Contents/
            Info.plist
            MacOS/
                Script (executable bit set, works when launched directly)
            PkgInfo
            Resources/
                AppIcon.icns

    Here is the Info.plist content:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>CFBundleExecutable</key>
            <string>Script</string>
            <key>CFBundleIconFile</key>
            <string>AppIcon</string>
            <key>CFBundleIdentifier</key>
            <string>asdf.ScriptApp</string>
            <key>CFBundleInfoDictionaryVersion</key>
            <string>6.0</string>
            <key>CFBundleName</key>
            <string>My script</string>
            <key>CFBundlePackageType</key>
            <string>APPL</string>
            <key>CFBundleShortVersionString</key>
            <string>1.0</string>
            <key>CFBundleSignature</key>
            <string>????</string>
            <key>CFBundleVersion</key>
            <string>1</string>
            <key>LSMinimumSystemVersion</key>
            <string>10.4</string>
        </dict>
        </plist>

    And the PkgInfo file only contains APPL????. I tested the Script with a simple echo "ok" and echo "ok" >/tmp/test (plus a #!/bin/sh header).

    So my questions are:

    - Is there some kind of validity caching for applications? Based on what? How do I flush it?
    - Where does this message come from? I tried to google it but all I get is a page talking about 32/64-bit Java…


  • First time setting up a MySQL database.

    - by Wilduck
    In trying to learn how to work with the LAMP stack, I've hit a wall with MySQL. I can't seem to find a good reference for the first-time setup of MySQL to be used with Apache and Python. So, my question is four-fold:

    1) Under what circumstances should I create my first database? That is, what user do I use (Apache's http user? root?)
    2) How do permissions work?
    3) Do I have to do anything on the MySQL side to make MySQL talk to Apache, or MySQL talk to Python/Django?
    4) Is there a good resource online that describes setting all of this up? I've found a bunch for using a database once it's in place, but none for the initial setup.

    Notes: I'm trying to run my LAMP stack on a dedicated little box for testing/learning purposes only, so I don't have access to any DBA that could help me, as much as I'd like one.
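
    On points 1-3, a common pattern is: administrative work (creating databases and users) is done as MySQL's own root account, each application gets a MySQL user that is granted rights only on its own database, and Apache itself never talks to MySQL - it's the Python/Django code running under Apache that connects with those credentials. A hedged sketch with made-up names:

        mysql -u root -p <<'SQL'
        CREATE DATABASE myapp CHARACTER SET utf8;
        CREATE USER 'myapp_user'@'localhost' IDENTIFIED BY 'change-me';
        GRANT ALL PRIVILEGES ON myapp.* TO 'myapp_user'@'localhost';
        FLUSH PRIVILEGES;
        SQL

    Django then just needs those values in its DATABASES setting (ENGINE, NAME, USER, PASSWORD, HOST), and nothing further is required on the MySQL side for Apache.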


  • Installing httpssl module on a running NGINX server

    - by Rob
    Hi, I'm new to NGINX; we inherited a project that runs Django/FCGI/NGINX on a hosted RHEL box. A requirement has come in that the site now needs to have SSL enabled. The client was pretty sure the person who had built the site had made it so they could use SSL. I backed up the conf file, added the server block for the SSL instance and tried to reload. The reload failed because it didn't recognize the ssl in this line:

        ssl on;

    Not an NGINX expert, but the David Caruso in me tells me that the server (sunglasses on) is not secure. I know that you need to configure NGINX at install time with this module. If this didn't happen, how hard/risky is it to reconfigure a running nginx box with this module, given that we didn't configure it in the first place?
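
    On nginx of that era, SSL support is compiled in rather than loaded at runtime, so the usual route is to rebuild the same nginx version with --with-http_ssl_module and swap the binary in place; checking how the current binary was built first lets you keep every other option identical. A sketch (source tree location and restart method are assumptions):

        nginx -V                    # prints the version and the original configure arguments
        # from a source tree of that same version, repeat those arguments and add SSL support:
        ./configure --with-http_ssl_module <original configure arguments>
        make
        sudo make install           # back up the old binary first
        sudo service nginx restart  # or: nginx -s stop && nginx

    The risk is mostly limited to the brief restart: configs are untouched, and keeping the old binary around means you can swap back if the new build misbehaves.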


  • phpbb behind a reverse proxy

    - by asciitaxi
    Hi, I've got a Django app running on Apache behind an nginx reverse proxy. Nginx takes requests on port 80 and forwards them to Apache on 127.0.0.1:81. This works fine. Now I want to run phpBB on Apache under /forums. My problem is that when phpBB does a redirect, it seems to redirect to the internal Apache port rather than port 80. So, for instance, when I first go to http://my-dev-server/forums to configure phpBB, it immediately redirects to http://127.0.0.1:81/forums/install/index.php. Is there something I need to do in the nginx/Apache/phpBB config to get it to redirect to the external port? Thanks very much!
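
    Redirects that point at the backend's own host and port are the classic reverse-proxy symptom: the backend builds the Location header from what it received. Two hedged things to try in the nginx location block that proxies to Apache - pass the public Host header through, and have nginx rewrite any Location headers that still name the backend:

        proxy_set_header Host $host;
        proxy_redirect   http://127.0.0.1:81/  http://$host/;

    phpBB also records a server name, port and script path of its own during installation, so once the proxy headers are right it's worth re-checking those values in its board settings as well.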


  • Server memory issues, and expected level of service from hosting company

    - by Greg
    I'm involved in maintaining an Ubuntu VPS which runs our django websites (nginx/apache/mod_wsgi) and we've been having some memory spikes which have either caused the database to die, or induced kernel panic when the memory management system can't find any killable processes. I'm working on fixing the memory spikes, but I'm wondering whether there's anything I can do to better deal with the problem if it occurs again. Are there any tools I could use to detect the memory spikes and then, say, kill the offending process and email the server admin to fix it up? Killing off one website so that the server can remain operational is certainly preferable to the whole thing falling over. Also, we were charged $600 for after-hours service because we had to get the hosting company to restart the server - is this standard practice among hosting companies? Another provider I work with provides a panel with which I can stop and start the server myself, and given that a restart was all that was needed, $600 seems mightily excessive. (That's NZD, it's around $445 USD)
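
    For the "detect it and react before the whole box falls over" part, a monitoring tool like monit can watch memory use and restart or alert on a threshold out of the box; a hand-rolled cron sketch of the same idea is below, with the threshold, the choice to kill the single largest process, and the alert address all invented for illustration:

        #!/bin/sh
        # rough memory watchdog: if used memory crosses the threshold, kill the
        # biggest process by RSS and email the admin (needs a working 'mail' command)
        THRESHOLD_KB=1500000
        used_kb=$(awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2} END{print t-f-b-c}' /proc/meminfo)
        if [ "$used_kb" -gt "$THRESHOLD_KB" ]; then
            offender=$(ps -eo pid,rss,comm --sort=-rss | awk 'NR==2 {print $1" "$3}')
            kill ${offender%% *}
            echo "Memory at ${used_kb} kB, killed: $offender" | mail -s "memory watchdog fired" admin@example.com
        fi

    Killing whichever site is spiking is crude, but as noted it beats losing the whole server; run from cron every minute or so, it would usually catch a spike before the out-of-memory killer or a panic does.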


  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend much needed breathing room. Currently, this is the refresh pattern that we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid then re-validates the content. My question is on the "100%" parameter, which sets the lm-factor. I'm not sure if setting that to 100% is doing what we want it to. The assumption was that by setting it to 100%, it would ensure that objects stay in the cache for the max cache time. Is this an incorrect assumption? What are the best practices that one should follow when setting up a refresh pattern like this?
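
    For reference, a hedged reading of how Squid treats refresh_pattern MIN PERCENT MAX for responses without their own Cache-Control/Expires: older than MAX is stale, younger than MIN is fresh, and only ages in between are judged by the last-modified factor (PERCENT of the time since the object was last modified). With MIN and MAX both set to 60 minutes there is no "in between", so the 100% never actually decides anything - objects are simply served from cache for an hour and revalidated afterwards, which matches the stated goal. Responses that do carry explicit freshness headers are governed by those instead. One way to spot-check what a given URL gets:

        # ask the running Squid directly and look at the cache-related headers
        squidclient -h 127.0.0.1 -p 3128 -m HEAD http://www.example.com/some/page/ | grep -i -e '^X-Cache' -e '^Age'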

