Search Results

Search found 17950 results on 718 pages for 'directory listing'.


  • PEAR:DB connection parameters

    - by Markus Ossi
    I just finished my first PHP site and now I have a security-related question. I used PEAR:DB for the database connection and put its connection parameters in a separate file. How should I hide this parameter file? I found a guide (http://www.kitebird.com/articles/peardb.html) that says:

        Another way to specify connection parameters is to put them in a separate file that you reference
        from your main script. ... It also enables you to move the parameter file outside of the web
        server's document tree, which prevents its contents from being displayed literally if the server
        becomes misconfigured and starts serving PHP scripts as plain text.

    I have now put my file in a directory like this: /include/db_parameters.inc. However, if I go to this URL, the web server shows me the contents of the file, including my database username and password. From what I've understood, I should protect this file so that even if PHP were served as text, nobody could read it. What does "outside of the web server's document tree" mean here? Putting the PHP file out of the public_html directory altogether, deeper into the server's file system? Some chmod?
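
    A minimal sketch of what "outside the document tree" usually looks like in practice; the paths here are hypothetical, not from the question:

        <?php
        // Hypothetical layout:
        //   /home/youruser/secure/db_parameters.inc   (outside the docroot)
        //   /home/youruser/public_html/index.php      (inside the docroot)
        // No URL maps to the secure/ directory, so even a misconfigured server
        // that serves PHP as plain text cannot expose the parameter file.
        require_once 'DB.php';                                    // PEAR DB
        require_once '/home/youruser/secure/db_parameters.inc';   // defines $dsn

        $db = DB::connect($dsn);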

    Read the article

  • Why do I get a security warning in Visual Studio 2008 when creating a project?

    - by MikeG
    This is the error; it's basically a security warning. Here's the text grabbed off the dialog box:

        Security Warning for WindowsApplication4

        The WindowsApplication4 project file has been customized and could present a security
        risk by executing custom build steps when opened in Microsoft Visual Studio. If this
        project came from an untrustworthy source, it could cause damage to your computer or
        compromise your private information.

        Project load options:

        ( ) Load project for browsing
            Opens the project in Microsoft Visual Studio with increased security. This option
            allows you to browse the contents of the project, but some functionality, such as
            IntelliSense, is restricted. When a project is loaded for browsing, actions such
            as building, cleaning, publishing, or opening designers could still remain unsafe.

        ( ) Load project normally
            Opens the project normally in Microsoft Visual Studio. Use this option if you
            trust the source and understand the potential risks involved. Microsoft Visual
            Studio does not restrict any project functionality and will not prompt you again
            for this project.

        [ ] Ask me for every project in this solution

        [OK] [Cancel]

    When I click the More Details button I get this:

        Microsoft Visual Studio

        An item referring to the file was found in the project file
        "C:\Users\mgriffiths\Documents\Visual Studio 2008\ProjectATemp\WindowsApplication4\WindowsApplication4\WindowsApplication4.vbproj".
        Since this file is located within a system directory, root directory, or network
        share, it could be harmful to write to this file.

        [OK]

    Read the article

  • Linux RAID: Replacing a failed drive...permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, in a really odd way physically. On the machine it boots and is seen by the BIOS, but no partition can be seen on the drive consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 comes up with 2 out of 3 again. The status of my drives is as follows: /dev/sda2: good; /dev/sdb2: bad (the drive has a physical problem, so no partition data); /dev/sdc2: good; /dev/sdd2: good. Every time I reboot, mdadm seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I added /dev/sdd to the raid and it rebuilds, but this action isn't remembered across reboots, so it keeps listing /dev/sda and /dev/sdc and doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was automatically generated on the original OS install and only lists:

        DEVICE partitions
        MAILADDR root
        ARRAY /dev/md2 super-minor=2
        ARRAY /dev/md0 super-minor=0
        ARRAY /dev/md1 super-minor=1

    Basically just the raids to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given the partition's "Cheshire cat" behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
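
    For reference, one commonly suggested mdadm sequence for permanently retiring a dead member, assuming the device names from the question; this is a sketch, not a recipe verified against this exact failure mode:

        # Wipe the RAID superblock so md stops treating sdb as a member
        # (only once you are certain the data on it is expendable):
        mdadm --zero-superblock /dev/sdb2

        # Make sure the replacement is an active member of the array:
        mdadm --add /dev/md2 /dev/sdd2

        # Record the current, working membership so it survives reboots;
        # review mdadm.conf afterwards so the old super-minor lines don't conflict:
        mdadm --detail --scan >> /etc/mdadm.conf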

    Read the article

  • LightTPD and PHP not working if htdocs is outside the LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

        tools/
            LightTPD/
                htdocs/
            PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

        htdocs/
        tools/
            LightTPD/
            PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at their default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

        server.modules = (
            "mod_access",
            "mod_accesslog",
            "mod_alias",
            "mod_cgi",
            "mod_status",
        )
        include "variables.conf"
        include "mimetype.conf"

        # THIS WORKS
        server.document-root = server_root + "/htdocs"
        # THIS DOESN'T
        #server.document-root = server_root + "/../../htdocs"

        server.upload-dirs = ( temp_dir )
        index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                             "index.html", "index.htm", "default.htm" )
        server.event-handler = "libev"
        url.access-deny = ( "~", ".inc" )
        $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" }
        static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
        server.errorlog = server_root + "/logs/error.log"

        ######### Options that are good to be but not necessary to be changed #######
        dir-listing.activate = "enable"

        #### CGI module
        cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )
        status.status-url = "/server-status"
        status.config-url = "/server-config"
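
    One hedged thing to try: php-cgi resolves the script path it is handed on its own, and a document root containing ".." may be what it chokes on, even though lighttpd happily serves static files from the same path. A sketch with a hypothetical absolute path:

        # Instead of a relative traversal...
        #server.document-root = server_root + "/../../htdocs"
        # ...point the document root at the folder directly:
        server.document-root = "C:/htdocs"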

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster:

        - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
        - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to
          less than 50% of their configured RAM under constrained conditions.
        - The real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB RAM
          allocated, averaging under 9GB in use.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? E.g., do customers need to be educated? What specific metric should be used to meter RAM usage: tracking the peaks of "Active" versus time?

    Read the article

  • GPG error occurs while using "deb file:/local-path-to-repo ..." in /etc/apt/sources.list

    - by Chandler.Huang
    I need to install packages in an environment with no Internet connection. My plan is to download the dists structure from the Internet and then add a file path to /etc/apt/sources.list. So I downloaded the related structure, including ubuntu/dists/precise, precise-backports, precise-proposed, precise-security and precise-updates, from an FTP mirror server. Then I removed the original sources and added the following to my /etc/apt/sources.list:

        deb file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe
        deb-src file:path-to-local-ubuntu-directory/ precise main restricted multiverse universe

    Then I got a GPG error after apt-get update:

        root@openstack:/~# apt-get update
        Ign file: precise InRelease
        Get:1 file: precise Release.gpg [198 B]
        Get:2 file: precise Release [50.1 kB]
        Ign file: precise Release
        Get:3 file: precise/main TranslationIndex [3,761 B]
        Get:4 file: precise/multiverse TranslationIndex [2,716 B]
        Get:5 file: precise/restricted TranslationIndex [2,636 B]
        Get:6 file: precise/universe TranslationIndex [2,965 B]
        Reading package lists... Done
        W: GPG error: file: precise Release: The following signatures were invalid:
        BADSIG 0976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>

    I had tried the following steps after googling, but in vain:

        sudo apt-get clean
        cd /var/lib/apt
        sudo mv lists lists.old
        sudo mkdir -p lists/partial
        sudo apt-get update

    Is there any way to resolve this? And why does this error occur? Thanks a lot.
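
    If the mirrored tree itself is intact, one commonly suggested fix is re-importing the archive signing key; BADSIG can also mean the local Release file no longer matches Release.gpg (e.g. a partial download), in which case re-fetching the dists/ tree is the cure. The key ID below is a placeholder; substitute the one reported by the actual error:

        gpg --keyserver keyserver.ubuntu.com --recv-keys <KEYID>
        gpg --export --armor <KEYID> | sudo apt-key add -
        sudo apt-get update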

    Read the article

  • Create a new webpage using a WCF service

    - by sweeney
    Hello, I'd like to create a WCF operation contract which consumes a string, produces a public webpage and then returns the address of this new page.

        [ServiceContract(Namespace = "")]
        public class DatabaseService
        {
            [OperationContract]
            public string BuildPage(string fileName, string html)
            {
                //writes html to file
                //returns public file url
            }
        }

    This doesn't seem like it should be complicated, but I can't figure out how to do it. So far what I've tried is this:

        [OperationContract]
        public string PrintToFile(string name, string text)
        {
            FileInfo f = new FileInfo(name);
            StreamWriter w = f.AppendText();
            w.Write(text);
            w.Close();
            return f.Directory.ToString();
        }

    Here's the problem: this does not create a file in the web root, it creates it in the directory where the WebDev server is running. When I run it on an IIS server it seems to do nothing at all (at least nothing that I can tell). How can I get a handle to the web root programmatically, so that I can place the resultant file there and then return the public URL? I can tack on the domain name after the fact without issue, so if it only returns the relative path to the file, that's fine. Thanks, brian
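
    A sketch of one common approach, assuming the service is hosted inside IIS (or the Visual Studio development server), where the ASP.NET hosting environment knows the site's physical root:

        using System.IO;
        using System.ServiceModel;
        using System.Web.Hosting;

        [ServiceContract]
        public class PageService
        {
            [OperationContract]
            public string BuildPage(string fileName, string html)
            {
                // Physical path of the web application's root folder:
                string root = HostingEnvironment.ApplicationPhysicalPath;
                File.WriteAllText(Path.Combine(root, fileName), html);
                return "/" + fileName;   // relative URL; caller prepends the domain
            }
        }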

    Read the article

  • How to compare two file structures in PHP?

    - by OM The Eternity
    I have a function which gives me the complete file structure up to n levels:

        function getDirectory($path = '.', $ignore = '') {
            $dirTree = array();
            $dirTreeTemp = array();
            $ignore[] = '.';
            $ignore[] = '..';
            $dh = @opendir($path);
            while (false !== ($file = readdir($dh))) {
                if (!in_array($file, $ignore)) {
                    if (!is_dir("$path/$file")) {
                        // display of file and directory name with their modification time
                        $stat = stat("$path/$file");
                        $statdir = stat("$path");
                        $dirTree["$path"][] = $file . " === " . date('Y-m-d H:i:s', $stat['mtime'])
                            . " Directory == " . $path . "===" . date('Y-m-d H:i:s', $statdir['mtime']);
                    } else {
                        $dirTreeTemp = getDirectory("$path/$file", $ignore);
                        if (is_array($dirTreeTemp)) $dirTree = array_merge($dirTree, $dirTreeTemp);
                    }
                }
            }
            closedir($dh);
            return $dirTree;
        }

        $ignore = array('.htaccess', 'error_log', 'cgi-bin', 'php.ini', '.ftpquota');
        // function call
        $dirTree = getDirectory('.', $ignore);
        // file structure array print
        print_r($dirTree);

    Now, my requirement is this. I have two sites:

        1. The development/test site, where I test all the changes.
        2. The production site, where I finally publish all the changes tested on the development site.

    For example, suppose I have tested an image upload on the development/test site and found it appropriate to publish on the production site. I will then transfer the development/test DB details to the production DB completely, but now I also want to compare the file structures, so the corresponding image file is transferred to the production folder. There could be a situation where I update an image by editing it and uploading it with the same name; in that case the image file would already be present there, which rules out simple "file_exists" logic. For situations like these, how can I compare the two file structures to get the synchronization done as required?
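
    A sketch of one way to do the comparison, assuming each side is flattened into a path => mtime array with keys relative to its own root, so the two arrays are directly comparable:

        <?php
        function flatten($root, $dir = '') {
            $tree = array();
            foreach (scandir("$root/$dir") as $file) {
                if ($file === '.' || $file === '..') continue;
                $rel = ltrim("$dir/$file", '/');
                if (is_dir("$root/$rel")) {
                    $tree += flatten($root, $rel);
                } else {
                    $tree[$rel] = filemtime("$root/$rel");
                }
            }
            return $tree;
        }

        // Hypothetical roots:
        $dev  = flatten('/path/to/dev');
        $prod = flatten('/path/to/prod');

        // Files that are new on dev, or whose mtime differs from production,
        // including re-uploads under the same name:
        $changed = array_diff_assoc($dev, $prod);
        print_r(array_keys($changed));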

    Read the article

  • PHP.ini Settings Are Ignored By PHP5.3.5 Running With Windows 7 And Apache 2.2.15

    - by Andy
    I did an install of PHP 5.3.5 on Windows 7 Home Premium using the MSI installer download, overwriting a previous version of PHP 5 in C:\php5\. When first testing it, the server failed to start. I fixed this by adding the path to PHP in the Apache 2.2 httpd file, where the installer had inserted two lines of code pointing to the ini file directory and the PHP DLL but had left out the directory path. After doing this, the server starts OK and I can run phpinfo to view the PHP settings in my web browser on localhost. phpinfo states that the loaded configuration file is C:\php5\php.ini, as expected. But if I make any changes to the settings and reboot the server, none of the changes are reflected in phpinfo. Yes, I do refresh the browser window. If I rename php.ini to something else to make it invisible, phpinfo then correctly reports that there is no php.ini file loaded. So the settings in php.ini are being ignored and some default settings are being used (but I have no idea where these are derived from). As far as I can tell, there are no other php.ini files on my computer. phpinfo states that the Configuration File (php.ini) Path is C:\Windows, but this is the same as on a Windows XP computer that I work on, and in the Windows folder I don't see any php.ini file. In the Windows registry there is no mention of PHP 5, and the PATH environment variable starts with C:\php5\;. So hopefully someone can suggest how I can get PHP 5 to take notice of the C:\php5\php.ini settings. :)
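
    For what it's worth, a quick way to ask the running PHP itself which ini file it parsed, and whether any extra .ini files were scanned; both functions exist in this PHP version:

        <?php
        var_dump(php_ini_loaded_file());     // e.g. "C:\php5\php.ini", or false
        var_dump(php_ini_scanned_files());   // additional parsed .ini files, if any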

    Read the article

  • Read/Write Files from the Content Provider

    - by drum
    I want to be able to create a file from the Content Provider; however, I get the following error:

        java.io.FileNotFoundException: /0: open failed: EROFS (Read-only file system)

    What I am trying to do is create a file whenever an application calls the insert method of my Provider. This is the excerpt of the code that does the file creation:

        FileWriter fstream = new FileWriter(valueKey);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    Originally I wanted to use openFileOutput(), but the function appears to be undefined. Does anyone have a workaround to this problem?

    EDIT: I found out that I had to specify the directory as well. Here is a more complete snippet of the code:

        File file = new File("/data/data/Project.Package.Structure/files/" + valueKey);
        file.createNewFile();
        FileWriter fstream = new FileWriter(file);
        BufferedWriter out = new BufferedWriter(fstream);
        out.write(valueContent);
        out.close();

    I also enabled the permission

        <uses-permission android:name="android.permission.WRITE_INTERNAL_STORAGE" />

    This time I got an error saying:

        java.io.IOException: open failed: ENOENT (No such file or directory)
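
    On the openFileOutput() point: it is defined on Context rather than on ContentProvider, which is why it appears undefined inside the provider. A sketch of the Context-based version of the same write, which also avoids the hard-coded /data/data path:

        import android.content.Context;
        import java.io.BufferedWriter;
        import java.io.FileOutputStream;
        import java.io.OutputStreamWriter;

        // Inside the provider, e.g. in insert(); getContext() resolves to the
        // app's own /data/data/<package>/files directory automatically:
        FileOutputStream fos =
                getContext().openFileOutput(valueKey, Context.MODE_PRIVATE);
        BufferedWriter out = new BufferedWriter(new OutputStreamWriter(fos));
        out.write(valueContent);
        out.close();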

    Read the article

  • How to invoke make install for one subdirectory of a Qt project

    - by chalup
    I'm working on a custom library and I wish users could use it just by adding

        CONFIG += mylib

    to their .pro files. This can be done by installing a mylib.prf file to %QTDIR%/mkspecs/features. I've checked in the Qt Mobility project how to create and install such a file, but there is one thing I'd like to do differently. If I correctly understood the .pro/.pri files of Qt Mobility, the example projects don't really use CONFIG += mobility; instead they add the QtMobility sources to the include path and share the *.obj directory with the main library project. For my library I'd like to have examples that are as independent as possible, i.e. projects that can be compiled from anywhere once MyLib is compiled and installed. I have the following directory structure:

        mylib
        |- examples
        |- src
        |- tests
        \- mylib.pro

    It seems that the easiest way to achieve what I described above is creating mylib.pro like this:

        TEMPLATE = subdirs
        SUBDIRS += src
        SUBDIRS += examples
        tests:SUBDIRS += tests

    and somehow enforcing "cd src && make install" after building src. What is the best way to do this? Of course, any other suggestions for automatic library deployment before the examples are compiled are welcome.
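
    A sketch of one way this is often wired up in qmake, assuming the .prf really is the only thing src needs to install: give the subdirs project a fixed build order and attach an INSTALLS target to src, so a plain "make install" descends into src after it is built. File names follow the question; the install path is an assumption:

        # mylib.pro
        TEMPLATE = subdirs
        CONFIG  += ordered          # build src before examples and tests
        SUBDIRS += src examples
        tests:SUBDIRS += tests

        # in src/src.pro
        prf.files = mylib.prf
        prf.path  = $$[QT_INSTALL_DATA]/mkspecs/features
        INSTALLS += prf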

    Read the article

  • Writing a plist

    - by iOS-Newbie
    I am trying to test out writing a dictionary to a plist. The following code does not report any errors, but I cannot find any trace of the file that I supposedly wrote. Here is the code snippet:

        NSDictionary *myDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
            @"First letter of the alphabet", @"A",
            @"Second letter of the alphabet", @"B",
            @"Third letter of the alphabet", @"C",
            nil];

    I can see the dictionary contents displayed properly with either of these calls:

        NSLog(@"Here is my partial dictionary %@", myDictionary);
        for (NSString *key in myDictionary)
            NSLog(@"here it is again %@ %@", key, [myDictionary objectForKey:key]);

    The following code displays the "succeeded" message when the program is run repeatedly, even when changing the atomically: argument to NO so that no temporary file is written:

        if ([myDictionary writeToFile:@"myDictionary" atomically:YES] == NO)
            NSLog(@"write to file failed");
        else
            NSLog(@"write to file succeeded");

    However, when I search my current directory, or even my entire Mac, I cannot find any file called "myDictionary.plist" or any file containing the string "myDictionary". Isn't the path argument @"myDictionary" supposed to refer to a file in the current directory, i.e. where the class executable resides?
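
    On the last point: writeToFile: treats a bare name as relative to the process's current working directory (for an app launched from Xcode that is often / or the build folder), not the executable's own directory. A sketch that builds an absolute path instead:

        NSString *docsDir = [NSSearchPathForDirectoriesInDomains(
                NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
        NSString *path =
                [docsDir stringByAppendingPathComponent:@"myDictionary.plist"];
        if ([myDictionary writeToFile:path atomically:YES])
            NSLog(@"wrote %@", path);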

    Read the article

  • PHP file outside doc root needs files outside and inside the document root

    - by jax
    I have a library of classes, all interrelated. Some files are inside the document root and some are outside, using the <Directory> and Alias features in httpd.conf. Assume I have 3 files:

        webroot.php            (inside the document root)
        alias_directory.php    (inside a folder outside the doc root)
        alias_directory2.php   (inside a different folder outside the doc root)

    If alias_directory2.php needs both webroot.php and alias_directory.php, this does not work (remember, alias_directory.php and alias_directory2.php are not in the same location):

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';          // (ok)
        require_once $_SERVER['DOCUMENT_ROOT'].'/alias_directory.php';  // (not ok)

    It fails because alias_directory.php is not in the doc root. Similarly:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';  // (ok)
        require_once dirname(__FILE__).'/alias_directory.php';  // (not ok)

    The problem here is that dirname(__FILE__) returns the path of alias_directory2.php, not alias_directory.php. This works:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';       // (ok)
        require_once '/full/path/to/directory/alias_directory.php';  // (ok)

    but it is very nasty and a maintenance nightmare if I decide to move my library to another location. How do I solve this problem? It seems that I need a way to resolve an Alias folder properly.
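
    A sketch of one common way out: register every library folder on the include path once, in a single bootstrap, and require files by bare name afterwards; moving the library then means editing only the bootstrap. The paths reuse the placeholder from the question:

        <?php
        set_include_path(implode(PATH_SEPARATOR, array(
            '/full/path/to/directory',    // holds alias_directory.php
            '/full/path/to/directory2',   // holds alias_directory2.php
            get_include_path(),
        )));

        require_once 'alias_directory.php';    // resolved via the include path
        require_once 'alias_directory2.php';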

    Read the article

  • ASP.NET image upload: "Parameter is not valid" exception

    - by pennylane
    Hi guys, I'm just trying to save a file to disk using a posted stream from jQuery Uploadify, and I'm getting "Parameter is not valid." After adding to the error message so I can tell where it blew up in production, I'm seeing it blow up on:

        var postedBitmap = new Bitmap(postedFileStream)

    Any help would be most appreciated.

        public string SaveImageFile(Stream postedFileStream, string fileDirectory, string fileName,
                                    int imageWidth, int imageHeight)
        {
            string result = "";
            string fullFilePath = Path.Combine(fileDirectory, fileName);
            string exhelp = "";
            if (!File.Exists(fullFilePath))
            {
                try
                {
                    using (var postedBitmap = new Bitmap(postedFileStream))
                    {
                        exhelp += "got past bmp creation" + fullFilePath;
                        using (var imageToSave = ImageHandler.ResizeImage(postedBitmap, imageWidth, imageHeight))
                        {
                            exhelp += "got past resize";
                            if (!Directory.Exists(fileDirectory))
                            {
                                Directory.CreateDirectory(fileDirectory);
                            }
                            result = "Success";
                            postedBitmap.Dispose();
                            imageToSave.Save(fullFilePath, GetImageFormatForFile(fileName));
                        }
                        exhelp += "got past save";
                    }
                }
                catch (Exception ex)
                {
                    result = "Save Image File Failed " + ex.Message + ex.StackTrace;
                    Global.SendExceptionEmail("Save Image File Failed " + exhelp, ex);
                }
            }
            return result;
        }
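
    "Parameter is not valid" from the Bitmap constructor generally means GDI+ could not parse the stream. One frequent cause with uploads is a stream whose position is already at the end by the time the bitmap is built; rewinding it first is a cheap thing to rule out (a sketch):

        // Before constructing the bitmap:
        if (postedFileStream.CanSeek)
            postedFileStream.Seek(0, SeekOrigin.Begin);

        using (var postedBitmap = new Bitmap(postedFileStream))
        {
            // ... resize and save as before ...
        }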

    Read the article

  • Problem finding a header with a C++ makefile

    - by Max
    Hi. I've started working with my first makefile. I'm writing a roguelike in C++ using the libtcod library, and have the following hello-world program to test whether my environment is up and running:

        #include "libtcod.hpp"

        int main()
        {
            TCODConsole::initRoot(80, 50, "PartyHack");
            TCODConsole::root->printCenter(40, 25, TCOD_BKGND_NONE, "Hello World");
            TCODConsole::flush();
            TCODConsole::waitForKeypress(true);
        }

    My project directory structure looks like this:

        /CppPartyHack
            /libtcod-1.5.1        # this is the libtcod root folder
                /include
                    libtcod.hpp
            /PartyHack
                makefile
                partyhack.cpp     # the above code

    and here's my makefile:

        SRCDIR = .
        INCDIR = ../libtcod-1.5.1/include
        CFLAGS = $(FLAGS) -I$(INCDIR) -I$(SRCDIR) -Wall
        CC = gcc
        CPP = g++

        .SUFFIXES: .o .h .c .hpp .cpp

        $(TEMP)/%.o : $(SRCDIR)/%.cpp
        	$(CPP) $(CFLAGS) -o $@ -c $<

        $(TEMP)/%.o : $(SRCDIR)/%.c
        	$(CC) $(CFLAGS) -o $@ -c $<

        CPP_OBJS = $(TEMP)partyhack.o

        all : partyhack

        partyhack : $(CPP_OBJS)
        	$(CPP) $(CPP_OBJS) -o $@ -L../libtcod-1.5.1 -ltcod -ltcod++ -Wl,-rpath,.

        clean :
        	\rm -f $(CPP_OBJS) partyhack

    I'm using Ubuntu, and my terminal gives me the following errors:

        max@max-desktop:~/Desktop/Development/CppPartyhack/PartyHack$ make
        g++ -c -o partyhack.o partyhack.cpp
        partyhack.cpp:1:23: error: libtcod.hpp: No such file or directory
        partyhack.cpp: In function 'int main()':
        partyhack.cpp:5: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCODConsole' has not been declared
        partyhack.cpp:6: error: 'TCOD_BKGND_NONE' was not declared in this scope
        partyhack.cpp:7: error: 'TCODConsole' has not been declared
        partyhack.cpp:8: error: 'TCODConsole' has not been declared
        make: *** [partyhack.o] Error 1

    So obviously the makefile can't find libtcod.hpp. I've double-checked that the relative path to libtcod.hpp in INCDIR is correct, but as I'm just starting out with makefiles, I'm uncertain what else could be wrong. My makefile is based on a template that the libtcod designers provided along with the library, and while I've looked at a few online makefile tutorials, the code in this makefile is a good bit more complicated than any of the examples the tutorials showed, so I'm assuming I screwed up something basic in the conversion. Thanks for any help.
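
    One observation on the output: the compile line in the log ("g++ -c -o partyhack.o partyhack.cpp") carries no -I flag at all, which is make's built-in rule firing, not the pattern rule above. With TEMP empty, CPP_OBJS expands to "partyhack.o", while the pattern only matches "$(TEMP)/%.o", so the rule with $(CFLAGS) never runs. A sketch of the likely fix ("obj" is an arbitrary choice):

        TEMP = obj
        CPP_OBJS = $(TEMP)/partyhack.o

        $(TEMP)/%.o : $(SRCDIR)/%.cpp
        	$(CPP) $(CFLAGS) -o $@ -c $<     # recipe line must start with a tab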

    Read the article

  • How to import a module from PyPI when I have another module with the same name

    - by kuzzooroo
    I'm trying to use the lockfile module from PyPI. I do my development within Spyder. After installing the module from PyPI, I can't import it by doing import lockfile; I end up importing anaconda/lib/python2.7/site-packages/spyderlib/utils/external/lockfile.py instead. Spyder seems to want to have the spyderlib/utils/external directory at the beginning of sys.path, or at least none of the polite ways I can find to add my other paths get me in front of spyderlib/utils/external. I'm using Python 2.7, but with from __future__ import absolute_import. Here's what I've already tried:

        1. Writing code that modifies sys.path before running import lockfile. This works, but it
           can't be the correct way of doing things.
        2. Circumventing the normal mechanics of importing in Python using the imp module. (I
           haven't gotten this to work yet, but I'm guessing it could be made to work.)
        3. Installing the package with something like
           pip install --install-option="--prefix=modules_with_name_collisions" package_name.
           I haven't gotten this to work yet either, but I'm guessing it could be made to work.
           It looks like this option is intended to create an entirely separate lib tree, which
           is more than I need.
        4. Using pip install --target=lockfile_from_pip. The files show up in the directory where
           I tell them to go, but import doesn't find them. And in fact pip uninstall can't find
           them either: I get "Cannot uninstall requirement lockfile-from-pip, not installed",
           and I guess I will just delete the directories and hope that's clean.

    So what's the preferred way for me to get access to the PyPI lockfile module?
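
    A sketch of the imp-based route mentioned in item 2, which binds the name explicitly to the PyPI copy instead of relying on sys.path order; the path is hypothetical, and the FileLock usage assumes the package's usual entry point:

        import imp

        lockfile = imp.load_source(
            'lockfile',
            '/path/to/site-packages/lockfile/__init__.py')

        lock = lockfile.FileLock('/tmp/myjob')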

    Read the article

  • NoClassDefFoundError when trying to reference external jar files

    - by opike
    I have some third-party jar files that I want to reference in my Tomcat web application. I added this line to catalina.properties:

        shared.loader=/home/ollie/dev/java/googleapi_samples/gdata/java/lib/*.jar

    but I'm still getting this error:

        org.apache.jasper.JasperException: javax.servlet.ServletException:
            java.lang.NoClassDefFoundError: com/google/gdata/util/ServiceException
        org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:491)
        org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:401)
        org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313)
        org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    I verified that com.google.gdata.util.ServiceException is in the gdata-core-1.0.jar file, which is in the directory /home/ollie/dev/java/googleapi_samples/gdata/java/lib. I did bounce Tomcat after I modified catalina.properties.

    Update 1: I tried copying the gdata-core-1.0.jar file into /var/lib/tomcat6/webapps/examples/WEB-INF/lib as a test, but that didn't fix the problem either.

    Update 2: It actually does work when I copy the jar file directly into the WEB-INF/lib directory; there was a permissions issue that I had to resolve. But it's still not working when I use the shared.loader setting. I reconfirmed that the path is correct.

    Read the article

  • Exploiting Path Traversal Vulnerability

    - by Maputo
    I have a Java web app running on Tomcat on which I'm supposed to exploit a path traversal vulnerability. There is a section (in the app) where I can upload a .zip file, which gets extracted in the server's /tmp directory. The content of the .zip file is not checked, so basically I could put anything in it. I tried putting a .jsp file in it, and it extracts perfectly. My problem is that I don't know how to reach this file as a "normal" user from the browser. I tried entering ../../../tmp/somepage.jsp in the address bar, but Tomcat just strips the ../ and gives me http://localhost:8080/tmp/, resource not available. Ideal would be if I could somehow rename somepage.jsp so that it gets extracted into the web directory of the web app. But then, the Linux filesystem disallows slashes in filenames (e.g. ../../home/webapp/somepage.jsp). Are there maybe any escape sequences that would translate to / after extraction? Any ideas would be highly appreciated. Note: this is a school project in a security course where I'm supposed to locate vulnerabilities and correct them. Not trying to harm anyone...
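
    For what it's worth, the traversal usually goes inside the archive rather than into the on-disk file name: zip entry names are just strings and may contain "/" and "..", even though a single filename on the filesystem cannot. A sketch of building such an archive (the target path is hypothetical):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class TraversalZip {
            public static void main(String[] args) throws IOException {
                ZipOutputStream zos =
                        new ZipOutputStream(new FileOutputStream("evil.zip"));
                // An extractor that joins entry names naively writes this
                // outside /tmp:
                zos.putNextEntry(new ZipEntry("../webapps/ROOT/somepage.jsp"));
                zos.write("<%= 1 + 1 %>".getBytes());
                zos.closeEntry();
                zos.close();
            }
        }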

    Read the article

  • What is the right path for PHP includes on a Mac?

    - by skorned
    Running Mac OS X 10.5.8, with PHP 5.2.11 pre-installed, using Coda 1.6.10. I'm writing PHP files and then previewing them run from file, not from a server. This was working fine till I tried PHP includes. These don't work as a relative path, only as an absolute path from the root of the drive. Is there any way I can use statements like

        include_once "common/header.php";

    without specifying my entire file path like so:

        include_once "/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0/common/base.php";

    where ColoredLists_v1.0 is the directory with all the website files in it? I tried solutions like prepending $_SERVER['DOCUMENT_ROOT'] or dirname(__FILE__) to the file paths, but that didn't work, as the variables were not set. Is there any easy way to do this, or a configuration I can change so that it looks in a specific directory by default instead of looking at the drive root? Currently, echoing the include path shows ".:". When I include this line at the start of the script, it works:

        set_include_path('/Volumes/Macintosh HD/Users/neil/Desktop/Website/ColoredLists_v1.0');

    However, if I want to do this for all my scripts, I can't seem to make the change permanent. Even after I edited the Unix include_path in my php.ini, it doesn't seem to work.
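
    One note: __FILE__ (double underscores) is a compile-time magic constant rather than a variable, so it is always defined, even when a script runs from a file with no web server; forum formatting often eats the underscores, which may be how "dirname(File)" crept in. The usual pattern is a sketch like:

        <?php
        // Anchor all includes to this script's own directory:
        define('APP_ROOT', dirname(__FILE__));
        include_once APP_ROOT . '/common/header.php';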

    Read the article

  • Program freezing when syncing an LDAP database (100+ entries added)

    - by djerry
    Hey guys, I'm updating a ldap database. I need to add a list of users to the db. I've written a simple foreach loop. There are about 180 users i need to add, but at the 128th user, the program freezes. I know ldap is really used for querying (fast), and that adding and modifying entries will not go as smooth as a search query, but is it normal that the program freezes while doing this? I'll post some code just in case. public static void SyncLDAPWithMySql(Novell.Directory.Ldap.LdapConnection _conn) { List<User> users = GetUsers(); int iteller = 0; foreach (User user in users) { if (!UserAlreadyInLdap(user, _conn)) { TelUser teluser = new TelUser(); teluser.Telephone = user.E164; teluser.Uid = user.E164; teluser.Company = "/"; teluser.Dn = ""; teluser.Name = "/"; teluser.DisplayName = "/"; teluser.FirstName = "/"; TelephoneDA.InsertUser(_conn, teluser ); } Console.WriteLine(iteller + " : " + user.E164); iteller++; } } private static bool UserAlreadyInLdap(User user, Novell.Directory.Ldap.LdapConnection _conn) { List<TelUser> users = TelephoneDA.GetAllEntries(_conn); foreach (TelUser teluser in users) { if (teluser.Telephone.Equals(user.E164)) return true; } return false; } public static int InsertUser(LdapConnection conn, TelUser user) { int iResponse = IsTelNumberUnique(conn, user.Dn, user.Telephone); if (iResponse == 0) { LdapAttributeSet attrSet = MakeAttSet(user); string dnForPhonebook = configurationManager.AppSettings.Get("phonebookDn"); LdapEntry ent = new LdapEntry("uid=" + user.Uid + "," + dnforPhonebook, attrSet); try { conn.Add(ent); } catch (Exception ex) { Console.WriteLine(ex.Message); } } return iResponse; } Am i adding too many entries at a time??? Thanks in advance.

    Read the article

  • How can I get git to work with a remote server?

    - by Adrienne
    I am the CM person for a small company that just started using Git. We have two Git repositories currently hosted on a Windows box that is our all-purpose Windows server. But, we just set up a dedicated server for our CM software on an Ubuntu Linux server named "Callisto". So I created a test Git repository on Callisto. I gave its directory all of the proper permissions recursively. I had the sysadmin create a login for me on Callisto, and I created a key to use for logging in via SSH. I set up my key to use a passphrase; I don't know if that could be contributing to my problems? Anyway, I know my SSH login works because I tested it through puTTY. But, even after hours of trials and head scratching, I can't get my Windows Git bash (mSysGit) to talk to Callisto for the purposes of pushing or pulling Callisto's git repository files. I keep getting "Fatal error. The remote end hung up unexpectedly." And I've even gotten the error that Git doesn't recognize the test repository on Callisto as a git repository. I read online that the "Fatal error...hung up unexpectedly" is usually a problem with the server connection or permissions. So what am I missing or overlooking here? And why doesn't a pull using the git:// protocol work, since that only uses read-only access? Group and public permissions for the git repository's directory on Callisto are set to read and execute, but not write. If anyone could help, I would be so grateful. Thank you.

    Read the article

  • How do I recover drivers from another hard disk

    - by Carl
    The drivers for a Cardbus (PCMCIA) card that gives me 2 USB 2.0 ports are on the hard disk from my old laptop. I have lost the driver CD. I have a way to get files from that other hard disk. Which files do I need? The drivers for the card used to be on the following website - the information is still there, except the download links don't work: http://www.ht-link.com/en/DownView.asp?ID=10 - The drivers I need are the first listing - The Win XP drivers for the HT-112NEC. My e-mails to them have not been answered. The information on this card is here http://www.ht-link.com/en/ProductView.asp?ID=106 I already tried connecting that other drive to my new laptop (via USB) and adding the drive to the search criteria when selecting update driver in the Device Manager. It says there isn't a better match, and if I select manual the matching device is not listed. (I don't think "manual" sees drivers on the external hard disk - but only ones on the main drive and/or found listed in the registry.) I would try 'have disk' if I knew exactly what file to point to on the external drive. The drivers are on that hard disk - I installed them there, and used that card on that computer. The new laptop has Windows XP Pro SP3, the old one had Pro SP2 Thanks for any help.

    Read the article

  • Application runs fine when executed directly, fails as scheduled task (security issues)

    - by Carl
    I have an application that loads some files from a network share (the input folder), extracts certain data from them and saves new files (zips them with SharpZLib) on a different network share (output folder). This application runs fine when you open it directly, but when it is set to a scheduled task, it fails in numerous places. This application is scheduled on a Win 2003 server. Let me say right off the bat, the scheduled task is set to use the same login account that I am currently logged in with, so it's not because it's using the LocalSystem account. Something else is going on here. Originally, the application was assigning a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done, someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to using SharpZLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true and even after trying WNetCancelConnection(), it still didn't work. Then I decided to just map the drive manually on the server. Then when the app calls Directory.Exists(inputFolderPath) it returns false, even though it does exist. So, for whatever reason, I cannot read this directory from within the application. I can manually navigate to this folder in Windows Explorer and open files. The app log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?

    Read the article

  • How to structure this Symfony web project?

    - by James William
    I am new to Symfony and am not sure how to best structure my web project. The solution must accommodate 3 use cases: Public access to www.mydomain.com for general use Member only access to member.mydomain.com Administrator access to admin.mydomain.com All three virtual hosts point to the Symfony /web directory Questions: Is this 3 separate applications in my Symfony project (e.g. "frontend", "backend" and "admin" or "public", "member", "admin")? Is this a good approach if there is to be some duplicate code (e.g. generating a member list would be common across all 3 applications, but presented differently)? How would I route to the various applications based on the subdomain when a user accesses *.mydomain.com? Where in Symfony should this routing logic be placed? Or, is this one application with modules for each of the above use cases? EDIT: I do not have access to httpd.conf in apache to specify a default page for virtual hosts. I can only specify a directory for each subdomain using the hostin provider's cPanel.

    Read the article

  • Why does my ftp(e)s server fail about half of the time?

    - by user1092608
    I have this discussion at work regarding our ftp server running via vsftpd. Initially, we have opted to serve ftpes instead of sftp because this seemed the most flexible and straightforward solution for our server to have secure file transmission. Afterwards, our ftp server seems to be a source of issues for our end users. Half of the time, users complain about not working ftp connections. I must say, i tested our FTP trough different infrastructures (=in the field, at random times at random places) and indeed, sometimes behind some configurations (=no idea how they are configured, because the 'field' testing), i recieve errors. Some of the are: Error: Failed to retrieve directory listing (filezilla) Furthermore, behind my basic home configuration, everything seems to be running fine. I (think I) did all the basic configuration checks (passive mode?, firewall for all ports?, ...) and can't seem to find the source. Being a bunch of techies at our small office, yet knowing nothing about infrastructure, some start suggesting that ftps protocol could be the source of issues. ("No, i only knew sftp so far" "Ftps is not widespread"). I, however, strongly doubt this hypothesis, since reading around on the www, asking questions on serverfault, everyone seems to deny this. So, as I would like to avoid reconfiguring, since this involves messing around in our SSH service, our virtual user setup and ftp service, i would need some advice on 1) what could be potentially the general cause? 2) do you have some general tips? 3) would you mind having a look at my configuration file? ----- General Settings ----- write_enable=YES dirmessage_enable=YES nopriv_user=ftpsecure ftpd_banner="Welcome to XXXX FTP!" hide_ids=YES hide_file=.* max_per_ip=10 max_clients=10 local_enable=YES local_umask=022 chroot_local_user=YES secure_chroot_dir=/usr/share/empty userlist_enable=NO userlist_deny=YES userlist_file=/etc/vsftp_deny_users guest_enable=YES guest_username=ftpvirtual virtual_use_local_privs=YES user_sub_token=$USER local_root=/srv/ftp/ftpvirtual/$USER anonymous_enable=NO syslog_enable=NO xferlog_enable=YES xferlog_file=/var/log/vsftpd_xfer.log connect_from_port_20=YES pam_service_name=vsftpd listen=YES listen_port=21 pasv_enable=YES pasv_min_port=30000 pasv_max_port=30030 pasv_address=foo ssl_enable=YES rsa_cert_file=/etc/vsftpd.pem rsa_private_key_file=/etc/vsftpd.pem force_local_data_ssl=YES force_local_logins_ssl=YES ssl_tlsv1=YES ssl_sslv2=YES ssl_sslv3=YES ssl_ciphers=HIGH anon_mkdir_write_enable=NO anon_root=/srv/ftp anon_upload_enable=NO idle_session_timeout=900 log_ftp_protocol=NO dsa_cert_file=/etc/vsftpd.pem Thanks

    Read the article
