Search Results

Search found 6497 results on 260 pages for 'logon scripts'.

Page 152/260 | < Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >

  • Did I find a bug in PHP's `crypt()`?

    - by Nathan Long
    I think I may have found a bug in PHP's crypt() function under Windows. However: I recognize that it's probably my fault. PHP is used by millions and worked on by thousands; my code is used by tens and worked on by me. (This argument is best explained on Coding Horror.) So I'm asking for help: show me my fault. I've been trying to find it for a few days now, with no luck.

    The setup: I'm using a Windows server installation with Apache 2.2.14 (Win32) and PHP 5.3.2. My development box runs Windows XP Professional; the 'production' server (this is an intranet setup) runs Windows Storage Server 2003. The problem happens on both. I don't see anything in php.ini related to crypt(), but will happily answer questions about my config.

    The problem: Several scripts in my PHP app occasionally hang: the page sits there on 'waiting for localhost' and never finishes. Each of these scripts uses crypt to hash a user's password before storing it in the database, or, in the case of the login page, to hash the entered password before comparing it to the version stored in the database. Since the login page is the simplest, I focused on it for testing. I repeatedly logged in, and found that it would hang maybe 4 out of 10 times. As an experiment, I changed the login page to use the plain text password and changed my password in the database to its plain text version. The page stopped hanging. I saw that PHP's latest version lists this bugfix: "Fixed bug #51059 (crypt crashes when invalid salt are [sic] given)." So I created a very simple test script, as follows, using the same salt given in an official example:

        $foo = crypt('rasmuslerdorf','r1');
        echo $foo;

    This page, too, will hang, if I reload it like crazy. I only see it hanging in Chrome, but regardless of browser, the effect on Apache is the same.

    Effect on Apache: When these pages hang, Apache's server-status page (which I explained here, regarding a different problem) increments the number of requests being processed and decrements the number of idle workers. The requests being processed almost all have a status of 'Sending Reply,' though sometimes for a moment they will show either 'Reading request' or 'keepalive (read).' Eventually, Apache may crash. When it does, the Windows crash report looks like this:

        szAppName: httpd.exe
        szAppVer: 2.2.14.0
        szModName: php5ts.dll
        szModVer: 5.3.1.0  // OK, this report was before I upgraded to PHP 5.3.2, but that didn't fix it
        offset: 00a2615

    Is it my fault? I'm tempted to file a bug report to PHP on this. The argument against it is, as stated above, that bugs are nearly always my fault. However, my argument in favor of 'it's PHP's fault' is:

    - I'm using Windows, whereas most servers use Linux (I don't get to choose this), so the chances are greater that I've found an edge case
    - There was recently a bug with crypt(), so maybe it still has issues
    - I have made the simplest test case I can, and I still have the problem

    Can anyone duplicate this? Can you suggest where I've gone wrong? Should I file the bug after all? Thanks in advance for any help you may give.
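
    As a purely diagnostic sketch (assuming a Unix-like shell with the PHP CLI on the PATH; the $1$...$ value below is just an example of a longer, well-formed MD5-style salt, not something the app is required to use), the same call can be exercised outside Apache to see whether the short two-character salt alone reproduces the hang:

        # Call crypt() from the CLI, once with the bare "r1" salt from the
        # test script above and once with an MD5-style salt.
        php -r "var_dump(crypt('rasmuslerdorf', 'r1'));"
        php -r "var_dump(crypt('rasmuslerdorf', '\$1\$rasmusle\$'));"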

    Read the article

  • How do I secure Sql Server 2008 R2

    - by Mark Tait
    I have both a dedicated and a VPS (from Fasthosts) virtual server - the web sites/applications I run on these access SQL Server stored on the same web server. Until now, I have logged onto SQL Server on both the dedicated and VPS server from SQL Server Management Studio - until I noticed in my server application logs multiple attempts to log on to SQL Server using the 'sa' username, but with a failed password. So someone/bot is trying hard (repeatedly every couple of hours, for approx 20 attempts during each instance) to log on... so obviously I have to lock down remote access to SQL Server.

    What I have done is gone into Configuration Manager, and in SQL Server Network Configuration - Protocols for Sql2008, and also in SQL Native Client 10.0 Configuration - Client Protocols, I have disabled Named Pipes, TCP/IP (and VIA by default). I have left Shared Memory enabled. I also disabled the SQL Server Browser in SQL Server Services. Now the only way I can manage the databases on these servers is by logging on to them via Remote Desktop.

    Can anyone confirm if this is the correct way of stopping anyone maliciously logging on to SQL Server? (I'm not a DBA or security expert - and there are hundreds of articles advising all different ways - but I was hoping for the experts here to confirm, or otherwise, whether what I've done is correct.) Thank you, Mark

    Read the article

  • System information shown when booting Debian

    - by WebDevHobo
    When booting Debian, you'll see it printing a lot of information about the system variables and such. I don't really need to see all that, so I'd like to modify some scripts to make sure that on boot, it just does what it has to do, without printing it on the screen. Just something I fancy. Of course, still seeing errors would be nice. But that long stream of text I could do without. I've tried looking it up, but I can't find documentation on this specific thing anywhere.
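
    For what it's worth, a rough sketch of the usual knobs (this assumes a GRUB 2-era Debian; older releases keep the kernel line in /boot/grub/menu.lst instead, and how much of the init-script chatter VERBOSE suppresses varies by release):

        # Quiet most kernel messages by adding "quiet" to the kernel command
        # line, then regenerate the GRUB configuration:
        sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&quiet /' /etc/default/grub
        sudo update-grub

        # Ask the init scripts themselves to be less chatty:
        sudo sed -i 's/^VERBOSE=.*/VERBOSE=no/' /etc/default/rcS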

    Read the article

  • Script for run script

    - by user31568
    Hello everyone. There is a script:

        Dim WSHShell, WinDir, Value, wshProcEnv, fso, Spath
        Set WSHShell = CreateObject("WScript.Shell")
        Dim objFSO, objFileCopy
        Dim strFilePath, strDestination
        Const OverwriteExisting = True
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        Set windir = objFSO.getspecialfolder(0)
        objFSO.CopyFile "\\dv.rt.ru\SYSVOL\DV.RT.RU\scripts\shutdown.vbs", windir & "\", OverwriteExisting

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" _
            & "{impersonationLevel=impersonate}!\\" _
            & strComputer & "\root\cimv2")
        JobID = "1"

        Set colScheduledJobs = objWMIService.ExecQuery _
            ("Select * from Win32_ScheduledJob")
        For Each objJob in colScheduledJobs
            objJob.Delete
        Next

        Set objNewJob = objWMIService.Get("Win32_ScheduledJob")
        errJobCreate = objNewJob.Create _
            (windir & "\shutdown.vbs", "**093000.000000+660", _
            True, 1 OR 2 OR 4 OR 8 OR 16 OR 32 OR 64, , True, JobId)

    How can I make shutdown.vbs run not just once at 9:00, but repeatedly from 9:00 to 12:00? Thanks

    Read the article

  • Outgoing mail from linux not being delivered

    - by Jason
    I can't seem to send mail through my php scripts or through the linux console on my Centos 5.5 LAMP server, when the email is addressed to go to a domain that is hosted by my box. I think it is something to do with the email routing internally, or the DNS servers that the box uses not reporting the correct MX records. Basically my box doesn't host any mail, it's all hosted on google apps. My name servers are hosted by a 3rd party provider and I am using webmin. Webmin doesn't recognise the settings on the 3rd party provider. I'm unsure how to fix this. Previously when I had this problem on a cpanel server, I would edit the remotedomains and localdomains files, moving domains from one file to another and it would fix the problem. What information do I need to provide for anyone to work out what the issue is? Thanks
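
    In case it helps frame an answer: on a stock CentOS 5.5 box the MTA is usually sendmail, and the classic cause of this symptom is the domain being listed as locally delivered, just like the cPanel localdomains/remotedomains case mentioned above. A hedged sketch (example.com stands in for the affected domain; a Postfix setup would use mydestination instead):

        # If the domain appears here, sendmail treats it as local and never
        # looks up the Google Apps MX records:
        grep -i 'example.com' /etc/mail/local-host-names

        # Remove the entry and restart sendmail so mail for that domain is
        # routed out via DNS/MX again:
        sudo sed -i '/^example\.com$/d' /etc/mail/local-host-names
        sudo service sendmail restart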

    Read the article

  • Terminal Server 2008 Login: Access Denied

    - by user1236435
    When I try to RDP into a Server 2008 Terminal Server, I get a message that says "Access Denied" and an OK button. I set up the licensing mode correctly (per user) and have also allowed all remote connections. I get the following in the security event log:

        Log Name:      Security
        Source:        Microsoft-Windows-Security-Auditing
        Date:          28/06/2012 12:01:16
        Event ID:      4656
        Task Category: File System
        Level:         Information
        Keywords:      Audit Failure
        User:          N/A
        Computer:      0BraApps1.brenntagLA.hou
        Description:   A handle to an object was requested.

        Subject:
            Security ID:    BRENNTAGLA\jaadmin
            Account Name:   jaadmin
            Account Domain: BRENNTAGLA
            Logon ID:       0xbbe3f

        Object:
            Object Server:  Security
            Object Type:    File
            Object Name:    C:\Windows\System32\ServerManager.msc
            Handle ID:      0x0

        Process Information:
            Process ID:     0x60c
            Process Name:   C:\Windows\System32\mmc.exe

        Access Request Information:
            Transaction ID: {00000000-0000-0000-0000-000000000000}
            Accesses:       READ_CONTROL
                            SYNCHRONIZE
                            WriteData (or AddFile)
                            AppendData (or AddSubdirectory or CreatePipeInstance)
                            WriteEA
                            ReadAttributes
                            WriteAttributes
            Access Reasons: READ_CONTROL: Granted by D:(A;;0x1200a9;;;BA)
                            SYNCHRONIZE: Granted by D:(A;;0x1200a9;;;BA)
                            WriteData (or AddFile): Not granted
                            AppendData (or AddSubdirectory or CreatePipeInstance): Not granted
                            WriteEA: Not granted
                            ReadAttributes: Granted by ACE on parent folder D:(A;;0x1301bf;;;BA)
                            WriteAttributes: Not granted
            Access Mask:    0x120196
            Privileges Used for Access Check: -
            Restricted SID Count: 0

        Event Xml: (the XML rendering of this same event, which repeats the fields above)

    Any ideas?

    Read the article

  • What is AddType application/x-httpd-php-source

    - by egor
    I have Apache 2.0, PHP 5.2.4, and the following directive in httpd.conf:

        AddType application/x-httpd-php-source .php .php3 .php4 .php5 .php6

    The AddType directive maps the given filename extensions onto the specified content type. That is the only meaning of this directive. But why does this switch off the PHP handler assigned to the .php extension, so that I can view the source code of my scripts in the browser? And another:

        AddType application/x-httpd-php5 .php

    Why does this one switch the PHP handler on? It should simply send the header "Content-Type: application/x-httpd-php5" to my browser, and that should be the only effect of the AddType directive from mod_mime. I'm confused. Thanks for your replies.

    Read the article

  • Can I nohup/screen an already-started process?

    - by ojrac
    I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen. Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child proceses interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
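
    By way of illustration, here is roughly what can be done from the very shell that launched the job (a sketch only: it stops the shell from sending SIGHUP at logout, but it does not move the job's output away from the old terminal the way screen would):

        # In the shell that started the migration script:
        #   press Ctrl+Z to suspend the running job
        bg %1          # resume it in the background
        disown -h %1   # tell bash not to send this job SIGHUP at logout
        # Tools such as reptyr (or starting under screen in the first place)
        # go further and re-attach the process to a new terminal, but they
        # are separate utilities.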

    Read the article

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on. I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version; once I figure this out, I can solve my problem. So, given my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props

        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables. Something like the following (but that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    and it returns foo bar, then fizz buzz.
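
    For illustration, a minimal sketch of the loop body using bash's indirect expansion, ${!name} (the my.props path below is a placeholder):

        #!/bin/bash
        . ./my.props   # placeholder path

        for ((i=0; i < COUNT; i++))
        do
            a="A_$i"
            b="B_$i"
            # ${!a} expands to the value of the variable whose name is stored
            # in $a, i.e. A_0/B_0 on the first pass, A_1/B_1 on the next.
            echo "${!a} ${!b}"
        done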

    Read the article

  • Using Gentoo's `ebegin`, `eend` etc under Ubuntu

    - by Marcus Downing
    We're quite fond of the style of the ebegin, eend, eerror, eindent etc commands used by Portage and other tools on Gentoo. The green-yellow-red bullets and standard layout make for very quick spotting of errors, on what would otherwise be very grey command line output.

        #!/bin/sh
        source /etc/init.d/functions.sh

        ebegin "Copying data"
        rsync ....
        eend $?

    Producing output similar to:

        * Copying data...                                    [ OK ]

    As a result we're using these commands in some of our common shell scripts, which is a problem for the people using Ubuntu and other linuxes. (linuces? linuxen? linucae? other distros) On Gentoo these functions are provided by OpenRC, and imported with the functions.sh file (whose exact position seems to vary slightly). But is there a simple way of getting these commands on Ubuntu? In theory we could replace them all with dull echos, but we'd rather not?
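
    One possibility, sketched below, is a small compatibility shim: use Gentoo's functions.sh where it exists and fall back to plain stand-in definitions elsewhere (only the functions named above are covered, and the fallback output is plainer than OpenRC's coloured, right-aligned status column):

        #!/bin/sh
        if [ -r /etc/init.d/functions.sh ]; then
            # Real OpenRC/Gentoo implementations.
            . /etc/init.d/functions.sh
        else
            # Minimal stand-ins so the same scripts still run on Ubuntu etc.
            ebegin()   { printf ' * %s ...\n' "$*"; }
            eend()     { if [ "${1:-0}" -eq 0 ]; then printf '   [ ok ]\n'; else printf '   [ !! ]\n'; fi; return "${1:-0}"; }
            eerror()   { printf ' * ERROR: %s\n' "$*" >&2; }
            eindent()  { :; }
            eoutdent() { :; }
        fi

        ebegin "Copying data"
        rsync -a /srv/data/ /srv/backup/   # placeholder rsync invocation
        eend $?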

    Read the article

  • Drupal + Lighttpd: enabling clean urls (rewriting)

    - by Patrick
    I'm emulating Ubuntu on my mac, and I use it as a server. I've installed lighttpd + Drupal, and the following configuration section requires a domain name in order to make clean URLs work. Since I'm using a local server I don't have a domain name, and I was wondering how to make it work given that the IP of the local machine is usually changing. Thanks

        $HTTP["host"] =~ "(^|\.)mywebsite\.com" {
            server.document-root = "/var/www/sites/mywebsite"
            server.errorlog = "/var/log/lighttpd/mywebsite/error.log"
            server.name = "mywebsite.com"
            accesslog.filename = "/var/log/lighttpd/mywebsite/access.log"
            include_shell "./drupal-lua-conf.sh mywebsite.com"

            url.access-deny += ( "~", ".inc", ".engine", ".install", ".info",
                                 ".module", ".sh", "sql", ".theme", ".tpl.php",
                                 ".xtmpl", "Entries", "Repository", "Root" )

            # "Fix" for Drupal SA-2006-006, requires lighttpd 1.4.13 or above
            # Only serve .php files of the drupal base directory
            $HTTP["url"] =~ "^/.*/.*\.php$" {
                fastcgi.server = ()
                url.access-deny = ("")
            }

            magnet.attract-physical-path-to = ("/etc/lighttpd/drupal-lua-scripts/p-.lua")
        }

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can change to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!

    Read the article

  • Can I get a domain controller not to act as DNS for the members?

    - by rsw
    Hi. Let me try to explain my current setup. I have one Linux machine acting as DHCP and DNS (dhcpd3 and bind) in my network. This works fine; all computers I hook up to the network get an IP address and the proper DNS servers set. Let's call it 10.12.0.10. However, we also have a Windows Server 2003 domain controller in our network to which we add our Windows computers (running XP); let's call it 10.12.0.20.

    I noticed that when I run 'nslookup' on one of the Windows machines, it says that the primary DNS is 10.12.0.20. This has not been much of a problem since: the Windows clients are stationary, and the Windows server itself points at my real DHCP/DNS, so I can reach everything specified in it. However, this turns out to be a problem when we use laptops. They connect to the domain here and get a DNS server, but when the user travels or connects the computer from home, we hit a problem. They are connected to their internet, but their DNS is 10.12.0.20, which they can't reach since they're at home and not on the office network.

    I solved this by removing the registry key called "NameServer" with the value 10.12.0.20, but it gets set again whenever they log on to the domain the next time (when they get back to the office). Can I somehow make the computers take whatever DNS server they are handed when connecting to the internet or a home network, instead of always trying to reach the domain controller?

    Read the article

  • Default gateway is in different subnet. How to configure in RHEL6.2

    - by Dmytro Leonenko
    I have two subnets routed to my server from the ISP, but only one gateway IP. The gateway is on the same VLAN as my IP address. For example, network 1 is 1.0.0.0/24 and network 2 is 2.0.0.0/24. Both are routed to eth0 by my ISP. The gateway is 1.0.0.1, and my host IP is 2.0.0.1/24 (eth0). I can configure the default gateway manually with:

        ip route add default dev eth0
        ip route add default via 1.0.0.1

    and then the internet connection works properly. How do I configure this in /etc/sysconfig/network-scripts/ifcfg-eth0? I tried to set GATEWAY=1.0.0.1 but it doesn't work. I also tried setting GATEWAY and GATEWAYDEV in /etc/sysconfig/network, and that does only what the first command in the listing above does.
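
    For reference, a sketch of one way this kind of out-of-subnet gateway is often persisted on RHEL 6: a route-eth0 file whose lines are handed to "ip route add" when the interface comes up (addresses mirror the example above, and this assumes the classic network initscripts rather than NetworkManager):

        # First give eth0 a host route to the gateway, which is not inside
        # its own 2.0.0.0/24 subnet, then point the default route at it.
        printf '%s\n' '1.0.0.1 dev eth0' 'default via 1.0.0.1 dev eth0' \
            > /etc/sysconfig/network-scripts/route-eth0
        service network restart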

    Read the article

  • Change Directory Browsing Page in IIS 7.5

    - by Gabriel Ryan Nahmias
    NOTE: This post is tagged ASP Classic, but really that's just one of the languages in which I could write it. I really need assistance with configuring IIS (7.5). I have found many scripts and ideas for achieving this, but I require that it not be a "drop-in" replacement; it must work globally for any possible directory from one codebase. Here are several links related to this goal:

    - http://mvolo.com/get-nice-looking-directory-listings-for-your-iis-website-with-directorylistingmodule: the best example of what I want, and the one with which I can't seem to follow through.
    - http://www.daleanderson.ca/edb/: an example of a "drop-in" replacement (at least it's oriented for that purpose). It still has viable code that could be useful to serve as the main file that processes directory traversal.

    Read the article

  • User authentication -- username mismatch in IIS in ASP.NET application

    - by Cory Larson
    Last week, an employee's Active Directory username was changed (or a new one was created for them). For the purposes of this example, let's assume these usernames:

        Old: Domain\11111
        New: Domain\22222

    When this user now logs in using their new username, and attempts to browse to any one of a number of ASP.NET applications using only Windows Authentication (no Anonymous enabled), the system authenticates but our next layer of database-driven permissions prevents them from being authorized. We tracked it down to a mismatch of usernames between their logon account and who IIS thinks they are. Below are the outputs of several ASP.NET variables from apps running in a Windows 2008 IIS7.5 environment:

        Request.ServerVariables["AUTH_TYPE"]:                    Negotiate
        Request.ServerVariables["AUTH_USER"]:                    Domain\11111
        Request.ServerVariables["LOGON_USER"]:                   Domain\22222
        Request.ServerVariables["REMOTE_USER"]:                  Domain\11111
        HttpContext.Current.User.Identity.Name:                  Domain\11111
        System.Threading.Thread.CurrentPrincipal.Identity.Name:  Domain\11111

    From the above, I can see that only the LOGON_USER server variable has the correct value, which is the account the user used to log on to their machine. However, we use the "AUTH_USER" variable for looking up the database permissions. In a separate testing environment (completely different server: Windows 2003, IIS6), all of the above variables show "Domain\22222". So this seems to be a server-specific issue, like the credentials are somehow getting cached either on their machine or on the server (the former seems more plausible).

    So the question is: how do I confirm whether it's the user's machine or the server that is botching the request? How should I go about fixing this? I looked at the following two resources and will be giving the first one a try shortly:

    - http://www.interworks.com/blogs/jvalente/2010/02/02/removing-saved-credentials-passwords-windows-xp-windows-vista-or-windows-7
    - http://stackoverflow.com/questions/2325005/classic-asp-request-servervariableslogon-user-returning-wrong-username/5299080#5299080

    Thanks.

    Read the article

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing them, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    - Windows-International-Core-WinPE -- good for setting locales in the preboot environment (en-US for us English (US) speakers). Keeps me from having to set these on the initial image boot.
    - Windows-Setup_neutral -- I like the WindowsDeploymentServices -> ImageSelection, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    - Windows-Shell-Setup_neutral -- lets me put in a ProductKey for my MAK volume license (very useful and time saving). I can also set the TimeZone for the installation.
    - Windows-UnattendedJoin_neutral -- I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to not have this ability.
    - Windows-International-Core -- again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    - Windows-Shell-Setup -- allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. Also, the OOBE component under here lets me skip the EULA, hide wireless setup, and set my default NetworkLocation.

    All of this makes the entire OOBE totally automated. What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?

    Read the article

  • What is the proper way to report a bug against the upgrade process from Ubuntu 9.10 to 10.4?

    - by Kim
    I just upgraded to Lucid and discovered a nasty bug. It prevents the system from booting and took me hours to resolve. Now I'd like to report it along with the workaround I found. The only problem is: where? Other such bugs have been filed against "update-manager", but that's just the GUI calling some scripts which do the real work. So what do I do? What should I substitute for XYZ in ubuntu-bug XYZ?

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    And wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
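
    As one concrete illustration of the kind of tool being asked about, here is how a command-line JSON processor such as jq could answer the phone-number example (the people/*.json glob is a placeholder, and jq is only one of several such tools):

        # Average number of phone numbers per person, across files shaped like
        # the sample above; -s slurps all inputs into a single array first.
        jq -s '[.[] | (.phoneNumber | length)] | add / length' people/*.json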

    Read the article

  • Having trouble mapping Sharepoint document library as a Network Place

    - by Sdmfj
    I am using Office 365, SharePoint Online 2013. Using Internet Explorer, these are the steps I have taken: I ticked "keep me signed in" on the portal.microsoftonline.com page (it redirects me to the GoDaddy login page because Office 365 was purchased through them). I have added these sites to Trusted Sites (as well as every page in the process) and chose automatic logon in Internet Explorer. Once on the document library, I open it with "Open with Explorer" and copy the address as text. I then go to My Computer, right-click to add a network place, and paste in the document library address.

    It successfully adds the library as a network place about 30% of the time. I can do this same process 3 times in a row and it will fail the first 2 times and then succeed. It works for a little while, and then I get an error that the DNS cannot be found. I need multiple users in our organization to be able to access this document library as if it were a mapped network drive on our local network. Is there an easier way to do this? I may just sync using the OneDrive app, but I thought direct access to the files would avoid worrying about users keeping their files synced.

    Read the article

  • Reliable Backup Solution for Linux for Complete System Restoration

    - by Chris S
    What's the best backup solution for Linux that can completely restore the entire filesystem to a blank harddrive (including partitioning) after an old harddrive dies? I'm currently running a few Ubuntu machines, some with RAID-1 and others without RAID (mostly laptops). I'd like to implement a backup solution that can take incremental snapshots of the entire filesystem, so that if I were to replace all the harddrives in a machine, I could use the backup to restore a perfect copy of the previous filesystem. Unfortunately, nearly all the backup solutions I've found seem to be glorified rsync scripts, which only backup some files, and have no easy way to restore once the entire filesystem is gone. Some of the more complicated solutions, like Bacula, might do what I need, but require a complicated server/client setup and are notoriously difficult to maintain. I've heard that Apple's TimeMachine utility has this ability, and I've had similar success taking differential disk images with Acronis True Image on Windows, but of course neither of these work on Linux. Is there anything comparable for Ubuntu?

    Read the article

  • Apache configuration to accept all data

    - by ServerDown
    Hi, I have Apache running on port 7979 to talk with a device that sends data to the webserver; PHP scripts will later process the data and send back a reply XML. The problem is that the device sends a request like this:

        POST HTTP/1.1
        Content-Type: text/xml
        Content-Length: 369

    followed by the XML. When Apache sees this it gives a 400 error. Since the device cannot be changed, is there any way to accept the full data sent from the device and write it to some log? Currently Apache simply keeps sending 400 errors back. If there were a way to log the entire XML, or to create some custom handler for the 400 error, then the XML could be read by a PHP script. Looking forward to solutions.
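
    As a purely diagnostic sketch (the port number is arbitrary): the raw bytes the device sends can be captured outside Apache with a plain TCP listener, which both confirms the malformed request line (a POST with no request URI) and saves the XML even though Apache rejects the request:

        # Listen on a spare port and dump whatever the device sends into a file;
        # point the device at this port temporarily instead of 7979.
        nc -l 7980 > /tmp/device-request.log
        # (on netcat builds that use the older syntax: nc -l -p 7980)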

    Read the article

  • Picking a degree path...

    - by Chris
    I'll be going to the University of South Florida soon, and have to choose between two degrees. I want to head into general server (IT) administration for a small/medium business: setting up computers, imaging, managing file servers/logon servers, etc. (I had to change the http to hxxp in order to post.) I have two degrees I'm currently choosing between:

    - BSAS: hxxp://www.poly.usf.edu/Academics/AppliedAS/BSAS-IT/Program_of_Study.html
    - BSIT: hxxp://www.poly.usf.edu/IT/

    I like the idea of a BSAS because it'll get me out sooner, and then I can work on a few certifications to "match" the BSIT... I'm just worried companies will look at that as a "lesser" degree than a BSIT (or even a CS degree). What are your guys' thoughts on these two degrees? The BSIT has more math, which I still have about 2 more classes to go through (I'll be heading to USF this August), while the BSAS doesn't require those 2 extra math classes. I keep hearing from people that when they hire you for your first job, they don't care which degree you have, as long as it's relevant and it's a 4-year degree. Is this true?

    Read the article

  • Run script when shutting down ubuntu before the logged in user is logged out

    - by Travis
    I'm writing a script to back up some local directories on a Unix machine (Ubuntu) to a Samba drive. The script works fine, and I've got it running at shutdown and restart using the method described at http://en.kioskea.net/faq/3348-ubuntu-executing-a-script-at-startup-and-shutdown It works by placing the backup script into the /etc/rc6.d and /etc/rc0.d directories. However, there is a problem. After looking at the script's logfile, it seems to be run after the user is logged out. We are using LDAP authentication, and when the user logs out, the system cannot back up to their Samba share. Does anyone know of any way to run the script before the user is logged out?

    Read the article

  • Setup secure shared hosting (Apache, PHP, MySQL)

    - by Apaz
    So I'm setting up shared hosting with Apache, PHP, and MySQL, and the biggest question mark is what to do about PHP, since there are a million options out there for configuring it securely. The plan is:

    - Chroot for MySQL (built-in support for chroot)
    - Chroot for Apache (mod_security)
    - Each user executing their PHP scripts as their own user (see below)
    - Set open_basedir
    - Disable all "evil" PHP functions (allow_url_fopen, system, exec, and so on)

    I've looked at suexec and suphp but they seem very slow:

    - http://blog.stuartherbert.com/php/2007/12/18/using-suexec-to-secure-a-shared-server/
    - http://blog.stuartherbert.com/php/2008/01/18/using-suphp-to-secure-a-shared-server/

    So I've looked some more and found some other solutions:

    - apache2-mpm-itk + mod_php(?)
    - mod_fcgid + php-fpm
    - mod_fastcgi + php-fpm

    I've tried a simple setup with mod_fastcgi + php-fpm and it seems to work: it runs as the correct user and so on, but the protection against directory traversal is still open_basedir(?). One solution for that could be to use php-fpm's chroot option, but that causes a lot of other issues, like:

    - domain name resolution does not work
    - sending mail does not work

    Tips?
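
    To make the mod_fastcgi/mod_fcgid + php-fpm route a little more concrete, here is a rough sketch of a per-user pool definition (the pool name, user, socket path, and directories are placeholders, and exact file locations differ by distribution):

        ; /etc/php-fpm.d/site1.conf -- one pool per hosted user/site
        [site1]
        user  = site1
        group = site1
        listen = /var/run/php-fpm-site1.sock
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3
        ; Per-pool hardening: confine scripts to the site's own tree and
        ; switch off the usual shell-execution functions.
        php_admin_value[open_basedir] = /home/site1/www:/tmp
        php_admin_value[disable_functions] = exec,system,passthru,shell_exec,proc_open,popen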

    Read the article
