Search Results

Search found 25149 results on 1006 pages for 'test automation'.

  • How to block users in Apache httpd from accessing *.php files directly inside a Directory, so the content is only reached via the directory name

    - by Oxi
    My requirement looks simple, but Googling has not helped me yet. I want to show a 404 page to any user who tries to access a *.php file on my website directly (not redirect them to another folder or file). For example: when a client asks for www.example.com/home/ I want to show the content, but when the user requests www.example.com/home/index.php I want to show a 404 page. I tried different methods and nothing worked for me; one attempt is shown below:

        <Directory "C:/xampp/htdocs/*">
            <FilesMatch "^\.php">
                Order Deny,Allow
                Deny from all
                ErrorDocument 403 /test/404/
                ErrorDocument 404 /test/404/
            </FilesMatch>
        </Directory>

    Thanks in advance.
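
    One possible approach (a sketch, untested against this exact setup): use mod_rewrite to answer 404 only when the original request line names a .php file, so internal subrequests, such as DirectoryIndex resolving /home/ to index.php, keep working. The paths here are illustrative:

        RewriteEngine On
        # THE_REQUEST holds the raw request line ("GET /home/index.php HTTP/1.1"),
        # so this condition only matches what the client asked for directly,
        # not internal subrequests like the DirectoryIndex lookup.
        RewriteCond %{THE_REQUEST} "\.php[ ?]"
        # "-" leaves the URL untouched; R=404 makes Apache serve its 404 response.
        RewriteRule ^ - [R=404,L]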

  • SMART says disk failure is imminent due to bad blocks, what do I need to do?

    - by flix
    I have two OSes on my hard drive: Ubuntu 12.04 and Windows Vista (I keep it just because of school). Everything was OK on both OSes, but one day on Ubuntu I was getting awkward noises from my notebook's hard drive, then everything stopped and I couldn't do anything. On Windows everything was OK. Every time I boot Ubuntu I get about 5 minutes of normal run time without problems. After that the hard drive sounds crazy and nothing works. I could run S.M.A.R.T. tests from an older Ubuntu CD (10.04), both from the GUI (Disk Utility, or something like that) and from the terminal. The GUI told me that DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks; I ran that test a while ago) on my HDD. From the terminal (I don't remember if it was fsck or a SMART test command) I was told the HDD would fail in under 24 hours. That was 2-3 weeks ago. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use Cygwin and other alternatives for my Linux apps on Windows. How can I mark the bad blocks so Ubuntu won't use them? Please help.
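
    One way to mark them (a sketch; run it from a live CD with the partition unmounted, and only after backing up, since the drive is already failing): let e2fsck scan for bad blocks and record them in the filesystem's bad-block inode so ext3/ext4 stops allocating them. The device name is an assumption; check yours with "sudo fdisk -l".

        # Read-only bad-block scan; -k keeps any blocks already on the list
        sudo e2fsck -c -k /dev/sda1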

  • Is there a way to automatically disconnect a notebook from the electrical power supply?

    - by Diogo
    I know this looks weird and useless, but let me explain... I'm running the Windows Assessment and Deployment Kit (Windows ADK) to make some tests on Windows 8 Preview. One of its assessments is the "Battery Run Down Test", which tests battery consumption under some processor load. I'm trying to automate this test in some way, i.e. execute it without any human intervention (such as manually disconnecting the electric power source to leave my notebook running only on battery for this assessment). So, is there some ACPI API, Windows API, or even an easy batch/shell/VBScript/PowerShell command to do this? Has someone already made something like this? PS: I'm asking because I couldn't find any answer, but maybe someone here has a tip...

  • Groovy Refactoring in NetBeans

    - by Martin Janicek
    Hi guys, during NetBeans 7.3 feature development I spent quite a lot of time trying to get some basic Groovy refactoring into the game. I've implemented Find Usages and Rename refactoring for some basic constructs (class types, fields, properties, variables and methods). It's certainly not perfect and it will definitely need a lot of fixes and improvements to get it hundred-percent reliable, but I need to start somehow :) I would like to ask all of you to test it as much as possible and file new tickets for the cases where it doesn't work as expected (e.g. some occurrence that should be among the usages isn't there, etc.). It's really important for me, because I don't have a real Groovy project and thus I can test only some simple cases. I can promise that with your help we can make it really useful for the next release. Also please be aware that the current version focuses only on .groovy files. That means it won't find any usages in .java files (and the same applies to finding usages from Java files - it won't find any Groovy usages). I know it's not ideal, but as I said, we have to start somehow, and it wasn't possible to make it all-in-one, so the only other option was to wait for NetBeans 7.4. I'll focus on better Java-Groovy integration in the next release (not only in refactoring, but also in navigation, code completion, etc.). BTW: I've created a new component with the surprising name "Refactoring" in our Bugzilla [1], so please file reported issues under this category. [1] http://netbeans.org/bugzilla/buglist.cgi?product=groovy;component=Refactoring

  • Searching for an algorithm and data structure for entity awareness in 3D space

    - by Khanser
    I'm trying to build a huge AI system just for fun, and I've hit this problem: how can I let AI entities know about each other without making the CPU perform redundant and costly work? Every entity has a spatial awareness zone, and it has to know what's inside that zone when deciding what to do. First thought: for every entity, test whether each other entity is inside the first's reach. OK, that was the first try, and yes, it is redundant and costly. We are working with real-time AI over 10000+ entities, so this is not a solution. Second try: lay a grid over the awareness zone of every entity and test whether there are entities in those zones (we are working with 3D entities with float x, y, z location coordinates), testing every point in the grid against the entities indexed by coordinate. Well, I don't like this because it is also costly, though not as much as the first one. Third: create multi-linked lists over the x- and y-indexed positions of the entities, so that when we search an interval between the (x, y) and (z, w) positions (this interval defines the square over the spatial awareness zone) in the multi-linked list, we won't have 'voids'. This has the problem of finding the nearest value when there isn't one at the position where we start the search. I'm not convinced by any of these ideas, so I'm looking for some enlightenment. Do you have any better ideas?
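
    A common answer to this kind of neighbour query is a uniform grid (spatial hash): bucket every entity by its cell once per tick, then each entity only tests the entities in its own cell and the 26 neighbouring cells instead of all 10000+. A minimal sketch in C++ follows; the cell size and struct fields are assumptions, and it presumes the awareness radius is bounded by the cell size:

        #include <cmath>
        #include <cstdint>
        #include <unordered_map>
        #include <vector>

        struct Entity { float x, y, z; };

        const float CELL = 10.0f; // must be >= the largest awareness radius

        // Pack the three integer cell coordinates into one 64-bit key
        // (21 bits per axis, which is plenty for a bounded world).
        uint64_t cellKey(float x, float y, float z) {
            uint64_t cx = (uint64_t)(int64_t)std::floor(x / CELL) & 0x1FFFFF;
            uint64_t cy = (uint64_t)(int64_t)std::floor(y / CELL) & 0x1FFFFF;
            uint64_t cz = (uint64_t)(int64_t)std::floor(z / CELL) & 0x1FFFFF;
            return (cx << 42) | (cy << 21) | cz;
        }

        // Rebuilding the grid is O(n) per tick; a query for one entity then
        // touches at most 27 buckets rather than every other entity.
        std::unordered_map<uint64_t, std::vector<Entity*>>
        buildGrid(std::vector<Entity>& entities) {
            std::unordered_map<uint64_t, std::vector<Entity*>> grid;
            for (Entity& e : entities)
                grid[cellKey(e.x, e.y, e.z)].push_back(&e);
            return grid;
        }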

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying things like "All committed code must be working". In some articles people even describe how to create SVN or Git hooks that compile and test code before a commit. In my company we usually create one branch per feature, and one programmer usually works in that branch. I often (about 1 commit in 100, I think, and I believe with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit a week than test the whole project's stability/compilability ten times a day. For compilable-only code I use tags and some selected branches (trunk, etc.). I see these reasons to commit not-fully-working or non-compilable code: If I develop a new feature, it is hard to make it work after writing only a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing some function's prototype or interface, I may have to make hundreds of changes - not mechanical but intellectual ones - and keeping every one of them stable could mean making 1 commit instead of 100. In all these cases, to make stable commits I would have to make commits containing many, many changes, and it would be very hard to later find out "What happened in this commit?". Another aspect of this problem is that compiling code gives no guarantee of proper working. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the VCS? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
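
    For reference, the kind of hook those articles describe is small - a minimal sketch of a Git pre-commit hook, assuming a Maven project (substitute your own build command):

        #!/bin/sh
        # .git/hooks/pre-commit - reject the commit when the build or tests fail
        mvn -q clean test || {
            echo "Build or tests failed; commit aborted." >&2
            exit 1
        }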

  • When module calling gets ugly

    - by Pete
    Has this ever happened to you? You've got a suite of well-designed, single-responsibility modules, covered by unit tests. In any higher-level function you code, you are (for 95% of the code) simply taking output from one module and passing it as input to the next. Then you notice this higher-level function has turned into a 100+ line script with multiple responsibilities. Here is the problem: it is difficult (impossible?) to test that script. At least, it seems so. Do you agree? In my current project, all of the bugs came from this script. Further detail: each script represents a unique solution, or algorithm, formed by using different modules in different ways. Question: how can you remedy this situation? Knee-jerk answer: break the script up into single-responsibility modules. Comment on the knee-jerk answer: it already is! Best answer I can come up with so far: create higher-level connector objects which "wire" modules together in particular ways (take output from one module, feed it as input to another module). Thus if our script was:

        FooInput fooIn = new FooInput(1, 2);
        FooOutput fooOutput = fooModule(fooIn);
        Double runtimeValue = getsomething(fooOutput.whatever);
        BarInput barIn = new BarInput(runtimeValue, fooOutput.someOtherValue);
        BarOutput barOut = barModule(barIn);

    it would become, with a connector:

        FooBarConnectionAlgo fooBarConnector = new FooBarConnectionAlgo(fooModule, barModule);
        FooInput fooIn = new FooInput(1, 2);
        BarOutput barOut = fooBarConnector.run(fooIn);

    So the advantage is, besides hiding some code and making things clearer, that we can test FooBarConnectionAlgo. I'm sure this situation comes up a lot. What do you do?

  • Nginx: route user request to backend

    - by xperator
    The goal is to have the Nginx web server act as a very basic and simple load balancer/failover. But instead of fetching static files from the backend and serving them to the user, I just want to route/redirect the user's request to one of the backend servers:

        upstream backend {
            server server1.example.com:80;
            server server2.example.com:80;
            server server3.example.com:80;
        }
        location / {
            proxy_pass http://backend;
        }

    Instead of:

        User request (example.com/test.file) -> Nginx LB -> Backend -> Nginx LB -> User

    I want to have:

        User request (example.com/test.file) -> Nginx LB -> Backend -> User

    Is this even possible with Nginx? If not, how can I achieve this goal? UPDATE 1: Is there a way to use the rewrite directive with a backend upstream? UPDATE 2: It's not really necessary to use Nginx. I just want to have a direct reply from the backend to the user.
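
    Proxying can't produce a direct backend-to-user reply (that is direct server return, a layer-3/4 load-balancing technique, while nginx here speaks HTTP). The closest HTTP-level sketch is answering with a redirect so the client re-requests from a backend itself; the hostnames are from the question, the split percentages are assumptions:

        # In the http block: pick a backend per client address
        split_clients "${remote_addr}" $backend_host {
            33% server1.example.com;
            33% server2.example.com;
            *   server3.example.com;
        }

        server {
            listen 80;
            location / {
                # 302 sends the client to the chosen backend,
                # which then replies to the client directly
                return 302 http://$backend_host$request_uri;
            }
        }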

  • [SOLVED] VMware problems - networking - no packet response

    - by jack
    XP is my host, Ubuntu is my guest in VMware. When I run the following commands I should get SMTP responses, but I get no response. I used Wireshark to analyze it, and Wireshark also shows nothing:

        root@vmware:~# netcat 192.168.1.2 25
        220 762462a8c4d Microsoft ESMTP MAIL Service, Version: 6.0.2600.5949 ready at Fri, 12 May 2010 18:04:20 +0800
        EHLO SAYHELLO
        VRFY TEST@LOCALHOST
        test \ sdfsafsd

    How can I fix it? UPDATE: I came to know that this is not a VMware problem but a netcat problem. For this, you might have to type Ctrl+M {ENTER} {ENTER}.
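
    The Ctrl+M workaround points at line endings: SMTP expects each command terminated by CRLF, while interactive netcat sends a bare LF. A scripted sketch that sends the CRLF endings explicitly (host, port and commands as in the question):

        printf 'EHLO SAYHELLO\r\nVRFY TEST@LOCALHOST\r\nQUIT\r\n' | netcat 192.168.1.2 25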

  • Speakers will not work after I use USB Headset

    - by Josh K
    I am trying to configure my SteelSeries Siberia V2 Frost USB headset to work alongside my 2.0 speakers, which use a jack. My goal is to find an easy way (no restart) to switch playback between my headset and my speakers and vice versa. If I plug my headset in, make it the default device, and then restart my application/web page, the sound plays through the headset. If I switch the default to my speakers and restart apps/web pages, sound does not play. I know my speakers are on, because if I configure them through Windows and test, the sounds play, and sounds also play when I test through my audio manager. Even if I unplug my headset, I still cannot get sound out of my speakers unless I restart. My audio manager is Realtek HD Audio Manager, on Windows 7 x64. I have tried the speakers in the back with USB in front, and the speakers in front with USB in the front port. I have not tried speakers in the back with USB in the back.

  • Loading guest OS's (Windows) localhost through my host's (Mountain Lion) browsers

    - by Jonah Goldstein
    For work, I have to develop in Visual Studio, which I run via VMware Fusion 5. I really want to test via my Mac's native browsers, for a multitude of reasons - that is, view the IIS web stuff that my Windows VM exposes in my Mac's own native Firefox, Chrome, etc. If I could expose a pretty URL, that would be even better, but I would certainly settle for an ugly IP :) I got a decent number of views but no response when I asked on VMware's own boards. Everyone seems to want to go the other direction (developing in SublimeText/TextMate, serving up through MAMP, and exposing it to Windows browsers to test), and there seem to be tried and true solutions for that. Unfortunately (or fortunately, depending on your preference) my startup is pretty entrenched in the Visual Studio development tools. I'm really hoping that someone knows the answer to this. Thanks :)
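
    In principle this is plain networking: with the VM's adapter in bridged or NAT mode, the Mac can reach the guest's IIS at the guest's IP (run ipconfig inside Windows to find it), provided the Windows firewall allows port 80. For the pretty URL, one sketch is an /etc/hosts entry on the Mac; the address and name below are illustrative:

        # /etc/hosts on the Mac - map a friendly name to the VM's address
        192.168.132.128   dev.windows.local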

  • WebLogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load against my application (20 concurrent users running a light activity). The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well (see the screenshots). And yet, after I leave my test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for long (all the new threads that you don't see in the first screenshot were created while I was writing this question), and I've seen many more threads being created. Any idea why these threads are being created?
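
    A first diagnostic step (a sketch, assuming the server runs on a HotSpot JDK and you know its process id) is to take a few thread dumps a minute apart and compare what the growing ExecuteThreads are doing:

        jstack -l <pid> > threads-$(date +%s).txt

    WebLogic's self-tuning pool adds threads when requests are slow or stuck, so threads parked on the same lock or external call across several dumps would explain the growth.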

  • Cross-Forest Trust

    - by cdalley
    I am looking at testing a cross-forest trust. We have two domain controllers (with different forests and domain names) set up, so we can move everyone onto the new domain. We do NOT run Exchange on site, and we do not currently have any links from O365 to AD. On to the problem - I have set up two DCs in virtual machines on the same network (192.168.0.*):

        The Windows 2003 server:
            Name: OLDSRVR ("clone" of our current domain controller)
            IP: 192.168.0.1
            Domain: internal.test.com

        The Windows 2012 server:
            Name: ADCTEST01 (brand-new domain set up from scratch, separate from internal.test.com)
            IP: 192.168.0.2
            Domain: internal.test2.com

    OLDSRVR can only see ADCTEST if it has a dynamic IP set; if I set a static IP it cannot see it. If I try using the dynamic IP and attempt to join, it gets to the end and then complains: "The trust relationship between this workstation and the primary domain failed". Any ideas?
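
    Cross-forest trusts commonly fail like this when each forest cannot resolve the other's DNS zone, which would also fit the static/dynamic IP difference (the dynamic configuration presumably hands out a DNS server that can resolve both). One sketch of a fix is a conditional forwarder on each DC pointing at the other; dnscmd ships with the 2012 DNS role and with the 2003 Support Tools:

        rem On OLDSRVR (192.168.0.1): forward the new forest's zone
        dnscmd /ZoneAdd internal.test2.com /Forwarder 192.168.0.2

        rem On ADCTEST01 (192.168.0.2): forward the old forest's zone
        dnscmd /ZoneAdd internal.test.com /Forwarder 192.168.0.1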

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count - since the cost per write is much higher: ... #define SIZE 1024 void test_write() { starttime(); int file = open("./test.dat",O_WRONLY|O_CREAT|O_SYNC,S_IWGRP|S_IWOTH|S_IWUSR); ... Running this gave the following results: Time per iteration 0.000065606310 MB/s Time per iteration 2.709711563906 MB/s Time per iteration 0.178590114758 MB/s Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second; which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine at the profiles for the two cases. When the write() was trapping into the OS the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication how to interpret profiles from apps doing I/O. It's the sleep time that indicates disk activity.

  • Nginx ignoring client's HTTP 1.0 request and responding with HTTP 1.1

    - by Yoga
    I am testing with nginx/php5-fpm, using this code:

        <?php
        header($_SERVER["SERVER_PROTOCOL"]." 404 Not Found");
        // also tested: header("Status: 404 Not Found");
        echo $_SERVER["SERVER_PROTOCOL"];

    and forcing HTTP 1.0 with the curl command:

        curl -0 -v 'http://www.example.com/test.php'
        > GET /test.php HTTP/1.0
        < HTTP/1.1 404 Not Found
        < Server: nginx
        < Date: Sat, 27 Oct 2012 08:51:27 GMT
        < Content-Type: text/html
        < Connection: close
        <
        * Closing connection #0
        HTTP/1.0

    As you can see, I am already requesting with HTTP 1.0, but nginx replies to me with HTTP 1.1.

  • PC doesn't power up anymore

    - by Andrew
    I will tell you the story behind the problem. The computer needed to be reinstalled. It has 2 HDDs, so I had to unplug one to see which is which, to save as much data as possible. After unplugging the first one, it booted with the reinstallation CD; it wasn't the HDD I was looking for, so I turned the machine off and unplugged the other one instead. After unplugging it and turning the machine on, right after the boot test the HDD turned off and only the CPU fan kept working. I turned the machine off with the power supply's own switch, since holding the power button didn't do it. And now it doesn't want to turn on any more. I tried another test power supply, but the result is the same: it doesn't want to turn on. Any idea?

  • What is wrong with my expect script?

    - by Bryan
    I'm trying to learn how to use the expect command to help me automate deployment of some software via shell scripts, and I figured I'd start with something simple. I've created a file in my home dir called 'foo' using:

        touch foo

    and I've created the following script, saved as test.exp:

        #!/usr/bin/expect
        spawn rm -i foo
        expect "rm: remove regular empty file `foo'?"
        send "y\r"

    When I run the script using ./test.exp, it spawns the rm command, but it doesn't appear to send the y and carriage return. I know I don't have a typo in the expect string, as I've used copy and paste to put it in the script. What am I doing wrong?
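
    For comparison, a sketch that usually behaves (a guess at the cause, not a verified diagnosis): match a stable substring of the prompt rather than the exact punctuation-heavy string, and wait for EOF so the script doesn't exit - taking rm's terminal with it - before rm acts on the answer.

        #!/usr/bin/expect
        spawn rm -i foo
        # Match a stable substring of the prompt
        expect "remove regular empty file"
        send "y\r"
        # Wait for rm to finish before the script exits
        expect eof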

  • nginx timeout despite ridiculous configuration

    - by Joa Ebert
    The scenario is an API server that should handle uploads. Posting to my.host.com/api/upload should do something with the body the client sends. However, the API server has been designed to block the whole request until it has fully processed the file, including some analysis which can take up to approx. 5 min (...!). This has to change, of course. In the meantime I wanted to set up nginx as a load balancer in front of the API servers. I quickly ran into a timeout issue, consulted Google, and came up with this ridiculous test configuration:

        user www-data;
        worker_processes 4;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            access_log off;
            sendfile on;
            send_timeout 3600;
            keepalive_timeout 3600 120;
            tcp_nopush on;
            tcp_nodelay on;
            gzip off;
            client_header_timeout 3600;
            client_body_timeout 3600;
            proxy_send_timeout 3600;
            proxy_read_timeout 3600;
            proxy_connect_timeout 1800;
            proxy_next_upstream error;
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    and:

        upstream test {
            server host1;
            server host2;
        }

        server {
            listen 80;
            server_name my.host.com;
            client_max_body_size 10m;

            location /api/ {
                proxy_pass http://test;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $host;
                proxy_redirect off;
            }
        }

    Still, when an upload happens, I get the following result in the error.log:

        2010/12/22 13:36:42 [error] 5256#0: *187359 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xx.xx.xx.xx, server: my.host.com, request: "POST /api/upload HTTP/1.1", upstream: "http://apiserver:80/upload", host: "my.host.com"

    What else could I do? If I look at the log of the API server, I still see that it is processing the request and analyzing the file. But I think 3600 seconds as a timeout should be more than enough. This happens even after a couple of seconds. And of course I did a reload and force-reload of the configuration as well.

  • Using pscp and getting permission denied

    - by Espen
    I'm using pscp to transfer files to a virtual Ubuntu server using this command:

        pscp test.php user@server:/var/www/test.php

    and I get the error "permission denied". If I transfer to the folder /home/user/ I have no problems, so I guess this has to do with my user not having access to the folder /var/www/. When I use SSH, I have to use sudo to get access to the /var/www/ path - and I do. Is it possible to specify that pscp should "sudo" transfers to the server, so I can get access to the /var/www/ path and actually be able to transfer files to this folder?
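
    pscp has no sudo option of its own, so one common workaround (a sketch using the same PuTTY tool set) is to upload into the home directory and move the file with elevated rights afterwards:

        pscp test.php user@server:/home/user/test.php
        plink user@server "sudo mv /home/user/test.php /var/www/test.php"

    This assumes the account may run mv via sudo without a password prompt; if sudo needs a terminal to ask for one, add -t to the plink call.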

  • How do I get Safari to ignore the SSL certificate error?

    - by Tangopop
    In IE 6, 7 and 8 and Firefox 3.6.3 and 3.0.5, I have installed a local SSL certificate on the machine I am testing on, and I have gotten the browser to ignore the SSL error (which comes from one of my web test servers). Now I am trying to do the same thing within Safari 4, with no luck. Basically I am running some automated scripts to test my website before it goes live, and I need to be able to ignore these errors as the scripts will all run autonomously. This is the error screen I am trying to avoid: http://library.bowdoin.edu/news/images/ezproxy-err/safari.jpg. As I said, I have installed the certificate locally, and the IE 7 browser on the same machine works fine.

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on, and I'm rewriting the JS codebase to do tests. I've encountered a few problems, but I'm going steady now. I was wondering if I have the right approach, because I'm writing tests that should pass in a normal browser environment, not inside WordPress. I chose to do this because I want my plugin to be totally independent of the WordPress environment: I'm using RequireJS in a way that doesn't expose any globals, and I'm loading my own version of jQuery that doesn't override the one that ships with WordPress. This way my plugin should work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress installation. I wonder if this is the right approach, or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like "get_ajax_url":

        "get_ajax_url": function() {
            if (typeof window.ajaxurl === "undefined") {
                return "http://localhost/wordpress/wp-admin/admin-ajax.php";
            } else {
                return window.ajaxurl;
            }
        },

    but apart from that I got everything working right. What do you think?

  • What have you learned from the bugs you helped discover and fix?

    - by Ethel Evans
    I liked the core of this question, and wanted to re-ask it in a way that made it less about 'fun' and more about 'What do these past mistakes tell us about how we can write and test software better?' As an SDET, I'm always looking for anecdotes about new and interesting ways that programs can fail. I've learned a lot from these tales in the past, and would like to get that from the intelligent people in this community as well. I'd be interested in hearing what the issue was, how it was caught, whether you think anything could reasonably have been done to catch it earlier or to avoid the same issue on later projects, and any other interesting lessons you took away from this bug. Please only write about bugs you personally were involved with, ideally on a project you worked on (e.g., no "10 years before I was born, this happened and it was FUNNY!" answers). Please vote up answers that are thought-provoking or could change how you develop or test in some way, so this isn't just 'social fun'. Try to avoid voting up something just because it was funny.

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations? Whether everything we want to explore is on the screen (bounded visualizations)? Whether the data is OK - whether the data is valid (that's one of the nice things about visualizations: you can spot errors in your datasets)? Usability? User interaction? Code quality? I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper/book about how to test visualizations? Also, do you happen to know about classic design patterns for visualizations (except the obvious ones like Pub-Sub)?

  • How do you manage updates without a staging environment: CentOS 6.3

    - by Gregg Leventhal
    I am managing about 20 servers, many of them virtual. They are almost all for different purposes, and none are clustered. I have a distributed LAMP stack, a few application servers, some build servers, and a few KVM hosts. They are mostly CentOS 6.3, with a few Ubuntu (unfortunately). I don't have the resources to set up a staging environment where I can have duplicates of my machines and test updates before rolling them out. I am taking file backups. What I want to know is how you approach keeping your Linux systems updated. I assume you don't just run yum update, but then how are you choosing the packages worth updating? When (if ever) are you updating the kernel, etc.? How do you test updates without a staging environment? Snapshot and hope for the best?
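
    For the yum side, one partial answer is to apply only security errata (a sketch; note that stock CentOS repositories may lack the security metadata RHEL ships, in which case the filter comes up empty):

        # Install the security plugin, then review and apply security updates only
        yum install yum-plugin-security
        yum --security check-update
        yum --security update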

  • Using Apache Environment Variables to set custom ErrorDocument

    - by Tad
    I've got a set of RewriteCond rules that test for various mobile devices and then set environment variables like "env=device:.iphone" or "env=device:.smartphone" if the user agent matches an iPhone or Android device. I'm now trying to redirect the user to custom-styled 404/500 server error pages for each device, by way of the error pages. Ideally I'd like to be able to test for a variable being there and then write a custom ErrorDocument string, but Apache doesn't seem to allow that here. Any ideas how I can construct if/else tests in an Apache conf file for environment variables?
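
    On Apache 2.4 or later, one sketch is the <If> expression syntax reading the variable your rewrite rules set; the error-page paths are illustrative, and the variable name "device" comes from your setup:

        <If "env('device') == 'iphone'">
            ErrorDocument 404 /errors/iphone-404.html
        </If>
        <ElseIf "env('device') == 'smartphone'">
            ErrorDocument 404 /errors/smartphone-404.html
        </ElseIf>
        <Else>
            ErrorDocument 404 /errors/default-404.html
        </Else>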
