Search Results

Search found 26517 results on 1061 pages for 'large directory'.


  • RewriteRule causes POST data to get dumped before I can access it

    - by MatthewMcGovern
    I'm currently setting up my own 'webserver' (an Ubuntu Server on some old hardware) so I can have a mess around with PHP and get some experience managing a server. I'm using my own little MVC framework and I've hit a snag... In order for all requests to make it through the dispatcher, I am using:

        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [L]
        </Directory>

    Which works great. I read on Stack Overflow to change the [L] to [P] to preserve POST data. However, this causes every page to return: "Not Found. The requested URL <url> was not found on this server." So after some more searching, I found, "Note that you need to enable the proxy module, and the proxy_http_module in the config files for this to work." The problem is, I have no idea how to do this, and everything I google has people using examples with virtual hosts, and I don't know how to 'translate' that into something useful for my setup. I'm accessing my webserver via my public IP and forwarding traffic on port 80 to the web server (like I'm pretending I have a domain/server). How can I get this enabled and get POST data working again? Edit: when I use the following, the server never responds and the page loads indefinitely:

        LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
        LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
        <Directory /var/www/>
            RewriteEngine On
            RewriteCond %{HTTP_REFERER} !^http://(.+\.)?82\.6\.150\.51/ [NC]
            RewriteRule .*\.(jpe?g|gif|bmp|png|jpg)$ /no-hotlink.png [L]
            RewriteCond %{REQUEST_URI} !\.(png|jpg|jpeg|bmp|gif|css|js)$ [NC]
            RewriteRule . HomeProjects/index.php [P]
        </Directory>
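
    On Debian/Ubuntu, the proxy modules are normally enabled with a2enmod rather than hand-written LoadModule lines; a minimal sketch, assuming the stock apache2 package layout:

        sudo a2enmod proxy proxy_http      # symlinks the modules into mods-enabled/
        sudo /etc/init.d/apache2 restart   # restart so Apache loads them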

    Read the article

  • Ways of file copy

    - by Tim
    I have sometimes found that when using a simple right-click and copy-and-paste, some files/directories are not copied completely, or not at all, for various reasons: for example, some saved-webpage files/directories have strange characters in their names, or their names are too long. In Windows 7, for instance, after saving the webpage http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ completely, deep inside directories whose parents may themselves have long names, I cannot copy its top ancestor directory: Windows complains that the filename of the saved-webpage directory is too long. In Ubuntu, I can sometimes save a file with a special character such as a newline in some directory, but when I copy that directory, the copy complains about the special character in the file name and I have to remove it manually. Such complaints come up in both Windows and Ubuntu. I was wondering what some better ways are to accomplish the copy job on both systems. For example, would archiving everything to be copied into a single archive help? If yes, how do I do that? Thanks and regards!
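
    Archiving first does sidestep the per-file name handling, since the copy then moves a single file; a minimal sketch for the Ubuntu side, assuming GNU tar and a directory named saved-pages (on Windows, 7-Zip or a similar archiver plays the same role):

        tar -cf saved-pages.tar saved-pages/   # pack the whole tree into one file
        # copy saved-pages.tar wherever it needs to go, then unpack it:
        tar -xf saved-pages.tar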

    Read the article

  • shell pipe behavior with MySQLDump

    - by unknown (google)
    I am using mysqldump for a large database (several GB) and importing the result through a pipe; please see the command below. Does the pipe stream incrementally, or does it wait until the dump finishes and then import? Is this a good way of importing a large DB across servers? I know you can export it gzipped, pscp it, then import. Quick alternatives are welcome.

        mysqldump -u root -ppass -q mydatabase | mysql -u root -ppass --host=xxx.xx.xxx.xx --port=3306 -C mydatabase
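
    A common variant that also compresses the data in transit is to pipe the dump through gzip and ssh; a sketch with the credentials and hostname assumed from the question:

        mysqldump -u root -ppass -q mydatabase | gzip -c \
          | ssh user@xxx.xx.xxx.xx "gunzip -c | mysql -u root -ppass mydatabase"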

    Read the article

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
            <Directory /var/www/root/>
                allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

        <Directory /usr/share/squirrelmail>
            Options Indexes FollowSymLinks
            <IfModule mod_php5.c>
                php_flag register_globals off
            </IfModule>
            <IfModule mod_dir.c>
                DirectoryIndex index.php
            </IfModule>
            # access to configtest is limited by default to prevent information leak
            <Files configtest.php>
                order deny,allow
                deny from all
                allow from 127.0.0.1
            </Files>
        </Directory>

        # users will prefer a simple URL like http://webmail.example.com
        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>
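
    A classic cause of every hostname landing in the first vhost is that name-based virtual hosting is not switched on (required in Apache 2.2 and earlier); a minimal sketch, assuming no NameVirtualHost line exists elsewhere in the config:

        # must appear once, before the <VirtualHost *> blocks it governs
        NameVirtualHost *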

    Read the article

  • GIT and Django Projects

    - by Garfonzo
    I have two servers, a Dev server and a Production server. The Production server runs a live Django site, while the Dev server has a copy of the Django project. I use the Dev server to work on the Django site, make improvements, fix bugs, etc. Once I am satisfied with how the Dev version is working, I move the whole Django directory from the Dev server and replace the same directory on the Production server. The two servers are not on the same LAN, so the process is not straightforward. There are a few issues with this so far:

      • Moving the whole directory is laborious and time-consuming.
      • If I only change a few files, replacing just those files is even more tedious than replacing the whole directory, since the project is getting fairly large and I worry that I'll miss something.
      • I often run into permission issues after I've moved things.
      • It's super inefficient, and, due to lack of time, I haven't bothered figuring out a new method.

    Now it's just getting out of hand and I need to address the situation. I am thinking I need to move to a Git repository for this process. But my question is how would I set this all up? Do I host the repository on the Production server, pull from the Dev server, do work, then commit? Then I would pull from the Production server (same server the repo is hosted on) to run the current working version? Do I host the repo on the Dev server, pulling from the same server to do work on the repo, then pull a working version onto the Production server? Should I be hosting the repo on a different server than the Production server and the Dev server (a third server)? Are there any special considerations with Django and repos that I need to worry about? Thanks for the help :)
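
    For reference, one common arrangement is a bare repository that both servers talk to over SSH, hosted wherever is convenient (Dev, Production, or a third machine); a rough sketch with hostnames and paths assumed:

        # on whichever machine hosts the repo
        git init --bare /srv/git/mysite.git

        # on the Dev server, inside the Django project
        git remote add origin user@repohost:/srv/git/mysite.git
        git push origin master

        # on the Production server
        git clone user@repohost:/srv/git/mysite.git /var/www/mysite   # first deploy
        cd /var/www/mysite && git pull                                # later deploys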

    Read the article

  • mod_wsgi - Apache configuration file

    - by Kevin
    Guys, sorry, I'm a newbie to this, but I've been following the mod_wsgi configuration tutorial and it's very spotty. In my httpd.conf file I add the virtual host like so:

        # 'Main' server configuration
        #
        # The directives in this section set up the values used by the 'main'
        # server, which responds to any requests that aren't handled by a
        # <VirtualHost> definition. These values also provide defaults for any
        # <VirtualHost> containers you may define later in the file.
        #
        # All of these directives may appear inside <VirtualHost> containers,
        # in which case these default settings will be overridden for the
        # virtual host being defined.
        #
        ServerName wsgihost
        DocumentRoot "/Library/WebServer/Documents"
        <Directory "/Library/WebServer/Documents">
            Order allow,deny
            Allow from all
        </Directory>

        WSGIScriptAlias /myapp /Users/KL/modwsgi/env/myapp.wsgi
        <Directory "/Users/KL/modwsgi/env">
            <Files myapp.wsgi>
                Order allow,deny
                Allow from all
            </Files>
        </Directory>

    I also added the following to my hosts file:

        127.0.1.1 wsgihost

    but I can't seem to connect. Am I doing something terribly wrong?
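
    One quick way to separate an Apache problem from a name-resolution problem, sketched with the paths above: hit the server directly with an explicit Host header, which bypasses the hosts-file lookup entirely.

        curl -H "Host: wsgihost" http://127.0.0.1/myapp   # works? then Apache and the WSGI app are fine
        ping wsgihost                                     # checks that the hosts entry actually resolves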

    Read the article

  • Tips for Domain Name Management?

    - by bofe
    Expired domain names = downtime for websites. Downtime = bad. How does your organization make sure domain names have been renewed? I believe ICANN requires registrars to give notice at 60 days and 30 days, but these notices can easily get ignored, especially with a large number of domains. Does your solution work for a large number of domain names (100+)? Is it registrar-specific?
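
    For a registrar-independent safety net, expiry dates can be pulled from whois output, though the exact field name varies by registry; a rough sketch, with the grep pattern an assumption that needs tuning per TLD:

        for d in $(cat domains.txt); do
            echo -n "$d: "
            whois "$d" | grep -iE 'expir(y|ation|es)' | head -1
        done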

    Read the article

  • redmine multithreaded

    - by Alex
    Our Redmine server is not responding because we connected it to a large repository. It has not crashed; it's just busy until it finishes checking the repo out, or whatever Redmine does when you set a new repo for a project. What is surprising is that this operation does not run in the background but blocks the server. Is there any way to have Redmine do this in the background next time we connect a large repo? Thanks
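
    The initial changeset scan can be run outside the web process instead; a sketch of the cron approach from the Redmine documentation of that era (the path and environment are assumptions):

        # fetch changesets every 10 minutes so web requests never trigger the scan
        */10 * * * * cd /path/to/redmine && ruby script/runner "Repository.fetch_changesets" -e production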

    Read the article

  • Problems with program startup in Win 7 this week

    - by PyNEwbie
    I have a program (ISYS) that I have been using since 2006. It migrated successfully to Windows 7. Just yesterday it started manifesting strange behavior. The program has a selector that lets you pick a directory with files relevant to the program. As of yesterday, the program will only let me select folders on the C drive. For example, I have a folder on D that I need to access with this program; from the GUI, when I select the D drive, the directory list does not load; instead the program churns away using 17% of the CPU cycles. I have let it run for an hour several times. I have found that I can get the directory I want by using a batch file to start the program, but this limits my ability to do certain things for which I really need the GUI. I did a number of reboots and other tests; I disconnected drives, but once I try to select a directory on any drive other than C, it churns away. I have experimented quite a bit and am convinced (which means I am wrong) that this has something to do with some setting change on my computer that I can't figure out, as I don't see any updates. Since ISYS has not been updated, I feel confident it is something internal. Any suggestions would be appreciated.

    Read the article

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: "How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI)". I have a server running Apache 2.4 on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first vhost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour? In other words, how do I tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

        <VirtualHost foobar.com:80>
            DocumentRoot /somepath/foobar.com
            <Directory /somepath/foobar.com>
                Options -Indexes
                Require all granted
                DirectoryIndex index.php
                AllowOverride All
            </Directory>
            ServerName foobar.com
            ServerAlias www.foobar.com
        </VirtualHost>

        <VirtualHost test.example.com:443>
            DocumentRoot /somepath/
            <Directory /somepath/>
                Options -Indexes
                Require all granted
                AllowOverride All
            </Directory>
            ServerName test.example.com
            SSLEngine on
            SSLCertificateFile [...]
            SSLCertificateKeyFile [...]
            SSLCertificateChainFile [...]
        </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome shows me an SSL error mentioning that the server identified itself as test.example.org. Thanks in advance!
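
    Since Apache hands any https:// name without its own SSL vhost to the first *:443 vhost it loaded, one workaround is to declare a catch-all default that loads first and refuses to serve anything; a minimal sketch, with the certificate paths assumed:

        # must be the first *:443 vhost Apache reads
        <VirtualHost _default_:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/default.pem
            SSLCertificateKeyFile /etc/ssl/private/default.key
            Redirect 404 /
        </VirtualHost>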

    Read the article

  • schroot build environment setup: how to avoid bind-mounting home

    - by minghua
    Recent Linux distributions such as Fedora and Ubuntu use a chroot environment to make their builds, because a build often needs special tools installed into the running system, and using a chroot avoids making any changes to the host. To set up such a build environment, the first step is to make a chroot. I'm following the setup guide at https://wiki.debian.org/Schroot:

        [wheezy-test]
        description=Contains the SPICE program
        aliases=test
        type=directory
        directory=/srv/chroot/test
        users=jsmith
        root-groups=root
        script-config=desktop/config
        personality=linux
        preserve-environment=true

    On my host, /home is on /dev/mapper. When the schroot is entered, the same /home is bind-mounted. Is there a way to avoid this? I would prefer a different /home inside the chroot. When changing the type from directory to plain, the binding is not performed; however, that also loses /proc, /sys, etc., and you'd have to bind-mount them manually, which does not seem like a good solution. If a simple configuration change is unavailable, any idea where the script for type=directory lives? I'll probably modify that script manually. Thanks in advance for any answers or hints!
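
    The mounts for type=directory normally come from the fstab file of the profile named by script-config, so a copy of that profile with the /home line commented out is usually enough; a sketch assuming the stock Debian/Ubuntu layout under /etc/schroot:

        sudo cp -r /etc/schroot/desktop /etc/schroot/buildprofile         # clone the profile
        sudo sed -i 's|^/home|#/home|' /etc/schroot/buildprofile/fstab    # drop the bind-mount
        # then point the chroot at it in schroot.conf:
        #   script-config=buildprofile/config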

    Read the article

  • Maximum number of memory segments that Notes can support has been exceeded

    - by Sagy
    Hi all, I am using Domino.dll to access NSF files from C# (.NET 2.0). I am using multiple threads to access 4 NSF files at a time. It works fine for small NSF files, but if I try to access large NSF files I get an Out of Memory exception and "Maximum number of memory segments that Notes can support has been exceeded". This exception usually occurs when I access a NotesDocument object from a large NSF view folder in a while loop. I am releasing each NotesDocument instance with Marshal.ReleaseComObject(notesDocument); still it throws the same exception. My goal is to access multiple NSF files at a time (at most 4), even for large NSF files (possibly gigabytes). Kindly help me if you have a solution. Thanks.
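
    For reference, the usual shape of such a loop over a Domino view is to fetch the successor before releasing the current document, so a released wrapper is never touched again; a rough sketch (the view variable is an assumption, not the asker's actual code):

        NotesDocument doc = view.GetFirstDocument();
        while (doc != null)
        {
            NotesDocument next = view.GetNextDocument(doc); // grab the successor first
            // ... read whatever fields are needed from doc ...
            System.Runtime.InteropServices.Marshal.ReleaseComObject(doc); // free the COM wrapper promptly
            doc = next;
        }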

    Read the article

  • ItemsControl.ItemsSource MVVM performance

    - by bitbonk
    I have a (non-virtualized) ItemsControl that binds its ItemsSource to an ObservableCollection of ViewModel instances. Once the large set of Model instances is loaded, all the ViewModel counterparts need to be added to that ObservableCollection. How can I add a large number of ViewModels without making the UI thread hang? I suppose the UI thread hangs because each time a new item is added, the ItemsControl needs to update itself and does layout etc. over and over again. Should I suspend the binding, add all items, and then resume? If so, how? Should I subclass the ObservableCollection to implement an AddRange, so only one CollectionChanged event is fired for adding multiple items? Or alternatively just replace the whole collection? Or is it better to add each item separately and call Dispatcher.Invoke for each item separately, so I would unblock frequently? How do you handle large dynamic lists that cannot be virtualized?
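
    On the AddRange idea, a minimal sketch of the usual pattern: a subclass that suppresses per-item notifications while adding and raises a single Reset at the end (the class name is illustrative):

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Collections.Specialized;

        public class RangeObservableCollection<T> : ObservableCollection<T>
        {
            private bool _suppressNotification;

            public void AddRange(IEnumerable<T> items)
            {
                _suppressNotification = true;
                foreach (T item in items)
                    Add(item);                    // no events fire while suppressed
                _suppressNotification = false;
                // a single Reset makes the ItemsControl re-read the collection once
                OnCollectionChanged(new NotifyCollectionChangedEventArgs(
                    NotifyCollectionChangedAction.Reset));
            }

            protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
            {
                if (!_suppressNotification)
                    base.OnCollectionChanged(e);
            }
        }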

    Read the article

  • Style Switcher & Text Resizer Combined?

    - by Stephen
    Hi there, I've come across various style switchers that allow you to change the stylesheet (i.e. light, dark, high contrast), and various text resizers that allow you to resize the text (usually with three A's: small, medium and large). However, I can't seem to find a single switcher/resizer that combines the two by allowing permutations, i.e. so the user can choose a dark background with small text, or a dark background with large text, etc. I can only get this working where the user chooses one or the other style (large text or high contrast, not a combination of the two). Any ideas on anything that may be suitable for this at all? Thanks, Stephen
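
    One way to get the permutations is to keep the two choices on independent body classes, so any theme can pair with any size; a rough jQuery sketch where the element IDs, class names, and data- attributes are all assumptions:

        $('#theme-picker a').click(function () {
            $('body').removeClass('light dark contrast')
                     .addClass($(this).data('theme'));  // e.g. data-theme="dark"
            return false;
        });
        $('#size-picker a').click(function () {
            $('body').removeClass('text-small text-medium text-large')
                     .addClass($(this).data('size'));   // e.g. data-size="text-large"
            return false;
        });
        // the stylesheet then targets combinations: body.dark.text-large p { ... }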

    Read the article

  • MySQL fast insert dependent on flag from a separate table

    - by Stuart P
    Hi all. For work I'm dealing with a large database (160 million+ rows a year, 10 years of data) and have a quandary: a large percentage of the data we upload is null data, and I'd like to stop it from being uploaded. The data in question is spatial in nature, so I have one table like so:

        idLocations (auto-increment int, PK)
        X (float)
        Y (float)
        AlwaysIgnore (bool)

    which is used as a reference in a second table like so:

        idLocations (int, PK, "FK")
        idDates (int, PK, "FK")
        DATA1 (float)
        DATA2 (float)
        ...
        DATA7 (float)

    Ideally, I'd like to find a method where I can do something like:

        INSERT INTO tblData (idLocations, idDates, DATA1, ..., DATA7)
        VALUES (...), ..., (...)
        WHERE VALUES(idLocations) NOT LIKE (SELECT FROM tblLocation WHERE alwaysignore=TRUE)
        ON DUPLICATE KEY UPDATE DATA1=VALUES(DATA1)

    So, for my large batch of input data (250 values in a block), skip the inserts where the idLocations value matches an idLocations value flagged with alwaysignore. Anyone have any suggestions? Cheers. -Stuart. Other details: running MySQL on a semi-dedicated machine, MyISAM engine for the tables.
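
    Since INSERT ... VALUES cannot carry a WHERE clause, the usual workaround is to stage each batch in a temporary table and filter with INSERT ... SELECT; a sketch assuming a staging table tmpData shaped like tblData (table and column names follow the question):

        INSERT INTO tblData (idLocations, idDates, DATA1)
        SELECT t.idLocations, t.idDates, t.DATA1
        FROM tmpData t
        JOIN tblLocation l ON l.idLocations = t.idLocations
        WHERE l.AlwaysIgnore = FALSE                 -- drop flagged locations
        ON DUPLICATE KEY UPDATE DATA1 = VALUES(DATA1);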

    Read the article

  • Python - do big doc strings waste memory?

    - by orokusaki
    I understand that in Python a string is simply an expression, and a string by itself would be garbage-collected immediately upon return of control to the code's caller, but... Large class/method docstrings in your code: do they waste memory by building up string objects? Module-level docstrings: are they stored indefinitely by the interpreter? Does this even matter? My only concern comes from the idea that if I'm using a large framework like Django, or multiple large open-source libraries, they tend to be very well documented, with potentially multiple megabytes of text. In these cases, are the docstrings loaded into memory for code that's used along the way, and then kept there, or are they collected immediately like normal strings?
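
    For reference, a docstring is kept alive because it is bound to the __doc__ attribute of its function, class, or module; CPython can also strip docstrings at compile time. A small sketch:

        def f():
            """A potentially very large docstring."""
            return 1

        print(f.__doc__)  # the docstring stays in memory, referenced by f.__doc__
        # Running the interpreter as `python -OO` compiles docstrings away,
        # so f.__doc__ becomes None and the text never occupies memory.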

    Read the article

  • How to transfer Netbeans Project into Eclipse?

    - by Yatendra Goel
    I have been using NetBeans for my Java desktop application for a few months. Now, in the middle of the project, I want to switch over to Eclipse, as NetBeans once corrupted my GUI, I had to re-create several parts of the GUI, and now it is displaying a compiler error:

        code too large
        private void initComponents() {
        1 error

    "code too large" is a strange error. The code it calls too large is just 10,000 lines long. This is the first time I've learned that we can't develop long code in NetBeans :) So instead of going into detail, I want to switch to Eclipse. I have never used it before. Could you please tell me how to import my incomplete NetBeans project into Eclipse?

    Read the article

  • jQuery Custom Gallery and jCarousel problem

    - by steve
    Active site can be seen here: http://www.studioimbrue.com/index2.php There are currently two small problems with the coding. First: when the page loads and you attempt to click on one of the large images to advance, nothing happens. Once a thumbnail is clicked, the click functionality of the large image becomes available. I'm trying to fix it so that when the page loads, the user can just start clicking the large image. Second: when an image is clicked, the thumbnail highlight changes. The only problem is that once it gets past 4, the "current" thumbnail needs to be visible, so the carousel should scroll to it. Right now the code for that is:

        nextThumb.closest('.thumbscontainer').jcarousel('next');

    but that makes it scroll every time you click. Thanks for any help
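
    On the first problem, the usual cause is that the large image's click handler is only bound inside the thumbnail-click code, so nothing is wired up at page load; a rough sketch of binding once at DOM ready via delegation (the selectors are assumptions, and .on() needs jQuery 1.7+, with .delegate() as the older equivalent):

        $(function () {
            // one handler, live before any thumbnail is touched
            $('#gallery').on('click', 'img.large', function () {
                // advance to the next image here
            });
        });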

    Read the article

  • HTTP vs FTP upload

    - by Richard Knop
    I am building a large website where members will be allowed to upload content (images, videos) up to 20MB in size (maybe a little less, like 15MB; we haven't settled on a final upload limit yet, but it will be somewhere between 10-25MB). My question is, should I go with HTTP or FTP upload in this case? Bear in mind that 80-90% of uploads will be of smaller sizes, around 1-3MB, but from time to time some members will also want to upload large files (10MB+). Is HTTP uploading reliable enough for such large files, or should I go with FTP? Is there a noticeable speed difference between HTTP and FTP while uploading files? I am asking because I'm using Zend Framework, which already has an HTTP adapter for file uploads; if I choose FTP, I would have to write my own adapter for it. Thanks!

    Read the article

  • How to know preferred icon size for MenuItem?

    - by barmaley
    Hi folks, in my application I have one large PNG file containing a hi-res image. Depending on the situation, I would like to use this image either as an icon or as a placeholder for an ImageView. For a MenuItem this image is too large, so I need to scale it down to a suitable size. If it is displayed on a large enough device, like a Samsung Galaxy Tab, I need to use one scale; on small ones, another; etc. I just noticed that on small devices the MenuItem icon is not scaled, just cropped, which is ugly. So the question is: how should I detect the preferred size?
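
    One runtime approach is to derive the pixel size from the display density instead of hard-coding it; a rough sketch (the 32dp baseline is an assumption, and density-bucket drawable folders are the more idiomatic fix):

        // inside an Activity, with `original` being the decoded hi-res Bitmap
        android.util.DisplayMetrics metrics = getResources().getDisplayMetrics();
        int iconPx = (int) (32 * metrics.density + 0.5f);  // 32dp expressed in pixels
        android.graphics.Bitmap scaled =
            android.graphics.Bitmap.createScaledBitmap(original, iconPx, iconPx, true);
        menuItem.setIcon(new android.graphics.drawable.BitmapDrawable(getResources(), scaled));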

    Read the article

  • Favorite web host.

    - by Greg Hostetler
    I've used many hosts over the years, like Media Temple (gs), DreamHost, Slicehost, and some others that I don't care to remember. But it's pretty hard to find a new host via search engines, because they normally surface those crappy affiliate-driven review sites. Which host would you use for:

      • small personal websites with small traffic;
      • medium to large websites/applications with medium to large traffic?

    What host would you use for your assets (large images, media, etc.)? Favorite dedicated/VPS host?

    Read the article

  • Efficient mass string search problem.

    - by Monomer
    The problem: a large static list of strings is provided, plus a pattern string comprised of data and wildcard elements (* and ?). The idea is to return all the strings that match the pattern; simple enough.

    Current solution: I'm currently using a linear approach of scanning the large list and globbing each entry against the pattern.

    My question: are there any suitable data structures I can store the large list in such that the search's complexity is less than O(n)? Perhaps something akin to a suffix trie? I've also considered using bi- and tri-grams in a hashtable, but the logic required to evaluate a match by merging the returned word lists against the pattern is a nightmare, and I'm not convinced it's the correct approach.
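
    For what the n-gram route can look like with the merge logic kept simple, here is a rough Python sketch: the trigram index only prefilters candidates, and a plain glob match does the final check (all names and the tiny word list are illustrative):

        import fnmatch
        from collections import defaultdict

        def trigrams(s):
            return {s[i:i + 3] for i in range(len(s) - 2)}

        words = ["apple", "apply", "maple"]          # stand-in for the large list
        index = defaultdict(set)
        for w in words:
            for g in trigrams(w):
                index[g].add(w)

        def search(pattern):
            # literal runs of >= 3 chars between wildcards give usable trigrams
            runs = [r for r in pattern.replace("?", "*").split("*") if len(r) >= 3]
            if runs:
                grams = set().union(*(trigrams(r) for r in runs))
                candidates = set.intersection(*(index[g] for g in grams))
            else:
                candidates = set(words)              # no literal to index on: full scan
            return [w for w in candidates if fnmatch.fnmatchcase(w, pattern)]

        print(search("app*"))                        # ['apple', 'apply'] in some order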

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and I'm looking to make things more efficient. My approach has been to split() the file into lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required: once to split into lines, then once more for each line. This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient means of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields will contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although that might work effectively.
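
    Reading the filehandle line by line already gives a single pass, since Perl hands back one record at a time instead of slurping the whole file and splitting it; a minimal sketch with built-ins only (the filename is an assumption):

        use strict;
        use warnings;

        open my $fh, '<', 'data.csv' or die "open: $!";
        while (my $line = <$fh>) {           # one pass: one record per iteration
            chomp $line;
            my @fields = split /,/, $line;   # safe given plain alphanumeric fields
            # ... process @fields ...
        }
        close $fh;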

    Read the article

  • How to ensure that a serialized object is completely read over a Socket?

    - by Amitd
    Hi guys, I am trying to write a socket server and client that communicate with each other via serialized objects. E.g., to log in, the client sends the server a serialized Login object, and the server deserializes the object to read the login details. Similarly for other types of request/response. I just wanted to be sure that Socket.Receive() can read large serialized objects completely. I have tried the code below, but it seems to fail when a large object is serialized and sent over the internet (it seems to work fine in LAN situations): http://stackoverflow.com/questions/2134356/sending-large-serialized-objects-over-sockets-is-failing-only-when-trying-to-grow

        using (MemoryStream ms = new MemoryStream())
        {
            int bytesRead;
            while ((bytesRead = m_socClient.Receive(buffer)) > 0)
            {
                ms.Write(buffer, 0, bytesRead);
            }
            // access ms.ToArray() or ms.GetBuffer() as desired, or
            // set Position to 0 and read
        }

    Are there any other ways to ensure that the object gets completely read?
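
    A single Receive() call is never guaranteed to return a whole object: TCP is a byte stream, so the standard fix is to frame each message with a length prefix and loop until exactly that many bytes have arrived; a minimal sketch (the helper names are illustrative, not framework APIs):

        using System;
        using System.Net.Sockets;

        static class Framing
        {
            // sender: a 4-byte length header, then the serialized payload
            public static void SendFrame(Socket s, byte[] payload)
            {
                s.Send(BitConverter.GetBytes(payload.Length));
                s.Send(payload);
            }

            // receiver: read the header, then loop until the full payload is in
            public static byte[] ReceiveFrame(Socket s)
            {
                int length = BitConverter.ToInt32(ReadExactly(s, 4), 0);
                return ReadExactly(s, length);
            }

            private static byte[] ReadExactly(Socket s, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int n = s.Receive(buffer, offset, count - offset, SocketFlags.None);
                    if (n == 0) throw new SocketException(); // peer closed mid-frame
                    offset += n;
                }
                return buffer;
            }
        }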

    Read the article
