Search Results

Search found 74153 results on 2967 pages for 'test and set'.


  • Why is my hosts file not working?

    - by elliot100
    I've been using the hosts file for local website development, and it's recently stopped working. No entries other than localhost resolve. I've simplified it to test, so it now contains only:

        127.0.0.1 localhost
        ::1 localhost
        127.0.0.1 test.dev

    localhost responds to ping, test.dev does not.

    - The file is called hosts with no extension.
    - It has no trailing spaces.
    - It's saved in C:\WINDOWS\System32\drivers\etc, which matches the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DataBasePath.
    - Oddly, despite UAC being on, I can edit, delete and save the file without admin permissions.
    - No proxy is being used, and the PC is not connected to a network for testing.
    - Stopping the DNS Client service seemed to resolve the issue for a few minutes; test.dev briefly resolved but doesn't any more.
    - The only firewall is Windows'.
    - The machine has been restarted.

    Is there anything else I should try?
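    One check not in the list above, offered only as a possible next step: the Windows resolver caches lookups (including negative ones), so a stale cache can mask an otherwise correct hosts file. A quick way to rule that out from an elevated command prompt:

        :: Show what the resolver currently has cached (test.dev should appear once the hosts file is read)
        ipconfig /displaydns | findstr /i "test.dev"

        :: Clear the resolver cache, then retry
        ipconfig /flushdns
        ping test.dev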

    Read the article

  • apache mod_proxy or mod_rewrite to hide the root of a webserver behind a path

    - by Giovanni Nervi
    I have two Apache 2.2.21 servers, one external and one internal. I need to map the internal Apache behind a path on the external Apache, but I have some problems with absolute URLs. I tried these configurations:

        RewriteEngine on
        RewriteRule ^/externalpath/(.*)$ http://internal-apache.test.com/$1 [L,P,QSA]
        ProxyPassReverse /externalpath/ http://internal-apache.test.com/

    or

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/
        </Location>

    My internal Apache uses absolute paths to find resources such as images, CSS and HTML, and I can't change that now. Any suggestions? Thank you
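    For context: ProxyPassReverse only rewrites HTTP response headers (Location and friends), not URLs inside the HTML bodies, which is why the internal server's absolute links leak through. One common approach, sketched here with the paths from the question and assuming the third-party mod_proxy_html filter is installed on the 2.2 front end, is to rewrite the links in the proxied pages:

        <Location /externalpath/>
            ProxyPass http://internal-apache.test.com/
            ProxyPassReverse http://internal-apache.test.com/

            # Rewrite absolute URLs inside returned HTML (requires mod_proxy_html)
            SetOutputFilter proxy-html
            ProxyHTMLURLMap http://internal-apache.test.com/ /externalpath/
            ProxyHTMLURLMap / /externalpath/
        </Location>

    This is a sketch, not a drop-in fix: the exact mappings depend on how the internal pages write their links, and links generated by JavaScript are not rewritten.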

    Read the article

  • Removing/modifying LDAP objectclasses/attributes using olc

    - by Foezjie
    I'm having trouble using OpenLDAP's olc to modify a schema without shutting down the server. To test some things out, I made the following schema:

        objectIdentifier tests orgUlyssisOID:4
        objectIdentifier testAttribute tests:1
        objectIdentifier testObjectClass tests:2

        attributeType ( testAttribute:1 NAME 'attr1'
            DESC 'attribuut 1'
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.40' )

        attributeType ( testAttribute:2 NAME 'attr2'
            DESC 'attribuut 2'
            SUP userPassword
            SINGLE-VALUE )

        objectclass ( testObjectClass:1 NAME 'class1'
            DESC 'objectclass 1'
            SUP top STRUCTURAL
            MUST ( attr1 $ attr2 ) )

    And added it to a new schema called test (cn={9}test.ldif in cn=schema). Now I can't seem to figure out how to delete class1 from that schema. I use the following LDIF (and tried lots of variations too, to no avail):

        dn : cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    Running ldapmodify -x -W -D cn=admin,cn=config -f test.ldif -d 0 gives no output. -d 1 gives this:

        ldap_create
        ldap_sasl_bind
        ldap_send_initial_request
        ldap_new_connection 1 1 0
        ldap_int_open_connection
        ldap_connect_to_host: TCP localhost:389
        ldap_new_socket: 4
        ldap_prepare_socket: 4
        ldap_connect_to_host: Trying 127.0.0.1:389
        ldap_pvt_connect: fd: 4 tm: -1 async: 0
        ldap_open_defconn: successful
        ldap_send_server_request
        ber_scanf fmt ({it) ber:
        ber_scanf fmt ({i) ber:
        ber_flush2: 38 bytes to sd 4
        ldap_result ld 0x7f2a8ccf3430 msgid 1
        wait4msg ld 0x7f2a8ccf3430 msgid 1 (infinite timeout)
        wait4msg continue ld 0x7f2a8ccf3430 msgid 1 all 1
        ** ld 0x7f2a8ccf3430 Connections:
        * host: localhost port: 389 (default) refcnt: 2 status: Connected
        last used: Mon Sep 10 11:29:57 2012
        ** ld 0x7f2a8ccf3430 Outstanding Requests:
        * msgid 1, origid 1, status InProgress
        outstanding referrals 0, parent count 0
        ld 0x7f2a8ccf3430 request count 1 (abandoned 0)
        ** ld 0x7f2a8ccf3430 Response Queue:
        Empty
        ld 0x7f2a8ccf3430 response count 0
        ldap_chkResponseList ld 0x7f2a8ccf3430 msgid 1 all 1
        ldap_chkResponseList returns ld 0x7f2a8ccf3430 NULL
        ldap_int_select
        read1msg: ld 0x7f2a8ccf3430 msgid 1 all 1
        ber_get_next
        ber_get_next: tag 0x30 len 12 contents:
        read1msg: ld 0x7f2a8ccf3430 msgid 1 message type bind
        ber_scanf fmt ({eAA) ber:
        read1msg: ld 0x7f2a8ccf3430 0 new referrals
        read1msg: mark request completed, ld 0x7f2a8ccf3430 msgid 1
        request done: ld 0x7f2a8ccf3430 msgid 1
        res_errno: 0, res_error: <>, res_matched: <>
        ldap_free_request (origid 1, msgid 1)
        ldap_parse_result
        ber_scanf fmt ({iAA) ber:
        ber_scanf fmt (}) ber:
        ldap_msgfree
        ldap_free_connection 1 1
        ldap_send_unbind
        ber_flush2: 7 bytes to sd 4
        ldap_free_connection: actually freed

    So no real indication of an error. Where am I going wrong?

    Bonus question: if I have some entries of a certain objectclass, can I modify it (add/remove attributeTypes) without removing the entries? Thanks in advance for all help.
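    One observation worth noting, offered as a guess rather than a confirmed fix: the debug trace above shows only a bind and an unbind, and no modify request is ever sent, which is what happens when ldapmodify cannot parse the change record at all. The space before the colon in "dn :" makes that first line an invalid attribute description, so the whole record may simply be skipped. A sketch of the same change with only that corrected:

        dn: cn={9}test,cn=schema,cn=config
        changetype: modify
        delete: olcObjectClasses
        olcObjectClasses: ( testObjectClass:1 NAME 'class1' DESC 'objectclass 1' SUP top STRUCTURAL MUST ( attr1 $ attr2 ) )

    If the record then parses but the delete still fails, bear in mind that the server stores schema values in cn=config with a numeric {n} ordering prefix and with OID macros expanded, so the value given to delete: may need to match the stored form exactly (check with ldapsearch against cn={9}test,cn=schema,cn=config first).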

    Read the article

  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. I have two 10GbE cards in a Linux box being used only for this bridge: one with two 10GBaseT ports and another with a single CX4 port. Two client computers are connected with 10GBaseT cards, and the CX4 card connects to a ProCurve switch.

    I can get the bridging happening mostly the way that I want. The clients receive DHCP information from the DHCP server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are OK; not amazing, but working on that is another matter.

    My problem is that the bridging machine has no internet access, meaning I can't update anything or apt-get anything. It can ping all other machines on the local network. I've tried the helpful hints from https://help.ubuntu.com/community/NetworkConnectionBridge ("Enabling Internet Use on the Bridging Computer") and get the following:

        RTNETLINK answers: File exists

    but dhclient br0 does nothing for me :( I think if it is anything it is a multiple-route problem, as both br0 and eth4 have IP addresses, even though I have only set it up so that br0 has one.

    Bridge setup details:

    /etc/network/interfaces:

        auto br0
        iface br0 inet static
            address 192.168.0.246
            netmask 255.255.255.0
            gateway 192.168.0.1
            broadcast 192.168.0.255
            dns-nameservers 192.168.0.1
            dns-search example.com
            dns-domain example.com
            #(eth2 & eth3 are the 10GbaseT)
            #(eth4 is the CX4 connection)
            pre-up ip link set eth2 down
            pre-up ip link set eth3 down
            pre-up ip link set eth4 down
            pre-up brctl addbr br0
            pre-up brctl addif br0 eth4 eth3 eth2
            pre-up ip addr flush dev eth3
            pre-up ip addr flush dev eth2
            pre-up ip addr flush dev eth4
            post-down ip link set eth4 down
            post-down ip link set eth2 down
            post-down ip link set eth3 down
            post-down ip link set br0 down
            post-down brctl delif br0 eth2 eth3 eth4
            post-down brctl delbr br0

    ifconfig -a:

        br0       Link encap:Ethernet  HWaddr 00:15:17:22:20:34
                  inet addr:192.168.0.102  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:4957 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:596320 (596.3 KB)  TX bytes:139952 (139.9 KB)

        eth4      Link encap:Ethernet  HWaddr 00:60:dd:47:7c:05
                  inet addr:192.168.0.57  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
                  RX packets:15391 errors:0 dropped:51 overruns:0 frame:0
                  TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:5916769 (5.9 MB)  TX bytes:154312 (154.3 KB)
                  Interrupt:70

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth4
        0.0.0.0         192.168.0.1     0.0.0.0         UG    100    0        0 br0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br0
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth4
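    Reading the route table above, eth4 holds its own address (192.168.0.57) and its own default route alongside br0's, so outbound traffic can leave through the wrong interface. A hedged sketch of how one might confirm and clear that on the live system (interface names and addresses taken from the question; this changes runtime state only, not the boot configuration, so something is presumably still re-addressing eth4 at boot, e.g. a stray iface eth4 stanza or a dhclient instance):

        # eth4 should carry no IP of its own once it is enslaved to br0
        ip addr flush dev eth4

        # drop the duplicate default route via eth4, keeping the one on br0
        ip route del default via 192.168.0.1 dev eth4

        # verify: only br0 should now hold an address and the default route
        ip -4 addr
        ip route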

    Read the article

  • Data Transformation Pipeline

    - by davenewza
    I have created a kind of data pipeline to transform coordinate data into more useful information. Here is the shell of the pipeline:

        public class PositionPipeline
        {
            protected List<IPipelineComponent> components;

            public PositionPipeline()
            {
                components = new List<IPipelineComponent>();
            }

            public PositionPipelineEntity Process(PositionPipelineEntity position)
            {
                foreach (var component in components)
                {
                    position = component.Execute(position);
                }
                return position;
            }

            public PositionPipeline RegisterComponent(IPipelineComponent component)
            {
                components.Add(component);
                return this;
            }
        }

    Every IPipelineComponent accepts and returns the same type, a PositionPipelineEntity:

        public interface IPipelineComponent
        {
            PositionPipelineEntity Execute(PositionPipelineEntity position);
        }

    The PositionPipelineEntity needs to have many properties, many of which are unused in certain components and required in others. Some properties will also have become redundant at the end of the pipeline. For example, these components could be executed:

    - TransformCoordinatesComponent: parse the raw coordinate data into a Coordinate type.
    - DetermineCountryComponent: determine and store the country code.
    - DetermineOnRoadComponent: determine and store whether the coordinate is on a road.

        pipeline
            .RegisterComponent(new TransformCoordinatesComponent())
            .RegisterComponent(new DetermineCountryComponent())
            .RegisterComponent(new DetermineOnRoadComponent());

        pipeline.Process(positionPipelineEntity);

    The PositionPipelineEntity type:

        public class PositionPipelineEntity
        {
            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLatitude { get; set; }

            // Only relevant to the TransformCoordinatesComponent
            public decimal RawCoordinateLongitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLatitude { get; set; }

            // Required by all components after TransformCoordinatesComponent
            public Coordinate CoordinateLongitude { get; set; }

            // Set in DetermineCountryComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public string CountryCode { get; set; }

            // Set in DetermineOnRoadComponent, not required anywhere.
            // Requires CoordinateLatitude and CoordinateLongitude (TransformCoordinatesComponent)
            public bool OnRoad { get; set; }
        }

    Problems: I'm very concerned about the dependency that a component has on properties. The way to solve this would be to create specific types for each component; the problem then is that I cannot chain them together like this. The other problem is that the order of components in the pipeline matters, so there is a hidden dependency there too, and the current structure does not provide any static or runtime checking for it. Any feedback would be appreciated.
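    One direction worth sketching, purely as an illustration rather than the answer: give each component an input and an output type, and let a small builder only accept a next stage whose input matches the previous stage's output. That moves both the property dependencies and the ordering dependency into the compiler. The names below (IPipelineStage, Pipeline.Start, Then) are invented for the sketch:

        using System;

        public interface IPipelineStage<TIn, TOut>
        {
            TOut Execute(TIn input);
        }

        // A composed pipeline is itself a stage, so stages chain into one callable object.
        public sealed class Pipeline<TIn, TOut> : IPipelineStage<TIn, TOut>
        {
            private readonly Func<TIn, TOut> run;

            internal Pipeline(Func<TIn, TOut> run) { this.run = run; }

            public TOut Execute(TIn input) { return run(input); }

            // Only a stage that consumes TOut can be appended, so a wrong ordering fails to compile.
            public Pipeline<TIn, TNext> Then<TNext>(IPipelineStage<TOut, TNext> next)
            {
                Func<TIn, TOut> current = run;
                return new Pipeline<TIn, TNext>(input => next.Execute(current(input)));
            }
        }

        public static class Pipeline
        {
            public static Pipeline<TIn, TOut> Start<TIn, TOut>(IPipelineStage<TIn, TOut> first)
            {
                return new Pipeline<TIn, TOut>(first.Execute);
            }
        }

    Usage would then look like Pipeline.Start(new TransformCoordinatesComponent()).Then(new DetermineCountryComponent()).Then(new DetermineOnRoadComponent()), with each component declaring the small types it actually needs (raw position in, coordinate out, and so on) instead of one wide PositionPipelineEntity.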

    Read the article

  • What's needed in a complete ASP.NET environment?

    - by Christian W
    We have an ASP 3.0 application with a few ASP.NET (2.0) ditties mixed in. (Our long-term goal is to migrate everything to ASP.NET, but that's not important for this issue.) Our current test/deploy workflow is like this:

    1. Use Notepad++ or VS2008 to fix a bug/feature (depending on what I have open)
    2. Open my virtual test server
    3. Copy the fixed file over, either with Explorer or, if I can be bothered to open it, WinMerge
    4. Test that the fix works
    5. Close the virtual test server
    6. Connect to our host with VPN
    7. Use WinMerge to update the files necessary
    8. Pray to higher powers that the production environment is not so different that something bombs

    To make things worse, only I have access to my "test server", so I'm the only one testing it. I really want to make this a bit more robust; I even have a Subversion setup running. But I always forget to commit changes... and I don't even work in my checked-out folder, but in a copy of what is currently in production...

    Can someone recommend some good reading on deploying, testing, staging and stuff like that? I currently use VS2008 and want to use Subversion or Git (or any other free VCS). Since I'm the only developer, Team System is not really an option (cost-related).

    I have found myself developing an "improved" feature, only to find a bug in the same feature in the production system. And since my "improved" feature involved deleting some old functionality, I have to fix bugs directly in production... That's not a fun feeling... (I have inherited this system recently, so it's not directly my fault that it is like this ;) )

    Read the article

  • FREE goodies if you are a UK based software house already live on the Windows Azure Platform

    - by Eric Nelson
    In the UK we have seen some fantastic take-up around the Windows Azure Platform, and we have lined up some great stuff in 2011 to help companies fully exploit the Cloud, but we need you to tell us what you are up to! Once you tell us about your plans around Windows Azure, you will get access to FREE benefits including email-based developer support and a free monthly allowance of Windows Azure, SQL Azure and AppFabric from Jan 2011, and more! (This offer is referred to as Cloud Essentials and is explained here.) And we will be able to plan the right amount of activity to continue to help early adopters through 2011.

    - Step 1: Sign up your company to Microsoft Platform Ready (you will need a Windows Live ID to do this).
    - Step 2: Add your applications. For each application, state your intention around Windows Azure (and SQL etc. if you so wish).
    - Step 3: Verify your application works on the Windows Azure Platform.
    - Step 4 (optional): Test your application works on the Windows Azure Platform. Download the FREE test tool, test your application with it and upload the successful results.
    - Step 5: Revisit the MPR site in early January to get details of Cloud Essentials and other benefits.

    P.S. You might want some background on the "fantastic take-up" bit:

    - We helped over 3000 UK companies deploy test applications during the beta phase of Windows Azure
    - We directly trained over 1000 UK developers during 2010
    - We already have over 100 UK applications profiled on the Microsoft Platform Ready site
    - And in a recent survey of UK ISVs you all look pretty excited around Cloud: 42% already offer their solution on the Cloud or plan to.

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.html files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the HTML files. Here's how it's set up.

    Sample directory tree:

        c:/inetpub/ (nothing in here)
        d:/web_files/my_web_apps
        d:/web_files/my_web_apps/app1/
        d:/web_files/my_web_apps/app2/
        d:/web_files/my_web_apps/html_files/

    app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS.

    Sample web directory tree:

        //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)
        //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/)

    If I put a file called test.html in the root of //app1/, add the default mapping to the ASP.NET DLL, and set up my security on the root folder with deny="?", then accessing test.html works exactly as expected: if I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated it displays test.html.

    If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads, and I stuck some code in to check whether I was still authenticated:

        <p>autheticated: <%=User.Identity.IsAuthenticated%></p>

    I figured it would say false, because why else would it bother to load the login page? Nope, it says true. So it knows I'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on Google to see if I've missed something. Fingers crossed.

    Read the article

  • Making a Camera look at a target Vector

    - by Peteyslatts
    I have a camera that works as long as it's stationary. Now I'm trying to create a child class of that camera class that will look at its target. The new addition to the class is a method called SetTarget(), which takes in a Vector3 target. The camera won't move, but I need it to rotate to look at the target.

    If I just set the target and then call CreateLookAt() (which takes in position, target, and up), when the object gets far enough away and underneath the camera, it suddenly flips right side up. So I need to transform the up vector, which currently always stays at Vector3.Up. I feel like this has something to do with taking the angle between the old direction vector and the new one (which I know can be expressed by target - position). I feel like this is all really vague, so here's the code for my base camera class:

        public class BasicCamera : Microsoft.Xna.Framework.GameComponent
        {
            public Matrix view { get; protected set; }
            public Matrix projection { get; protected set; }

            public Vector3 position { get; protected set; }
            public Vector3 direction { get; protected set; }
            public Vector3 up { get; protected set; }
            public Vector3 side
            {
                get { return Vector3.Cross(up, direction); }
                protected set { }
            }

            public BasicCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game)
            {
                this.position = position;
                this.direction = target - position;
                this.up = up;
                CreateLookAt();

                projection = Matrix.CreatePerspectiveFieldOfView(
                    MathHelper.PiOver4,
                    (float)Game.Window.ClientBounds.Width / (float)Game.Window.ClientBounds.Height,
                    1, 500);
            }

            public override void Update(GameTime gameTime)
            {
                // TODO: Add your update code here
                CreateLookAt();
                base.Update(gameTime);
            }
        }

    And this is the code for the class that extends the above class to look at its target:

        class TargetedCamera : BasicCamera
        {
            public Vector3 target { get; protected set; }

            public TargetedCamera(Game game, Vector3 position, Vector3 target, Vector3 up)
                : base(game, position, target, up)
            {
                this.target = target;
            }

            public void SetTarget(Vector3 target)
            {
                direction = target - position;
            }

            protected override void CreateLookAt()
            {
                view = Matrix.CreateLookAt(position, target, up);
            }
        }
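    One way to avoid the flip, sketched against the fields of the classes above (treat it as an illustration, not a drop-in fix): instead of always handing Vector3.Up to CreateLookAt, rebuild an orthonormal basis from the new direction whenever the target changes, falling back to another axis when the direction is nearly parallel to world up. The local names worldUp and newSide are only for the sketch; this would replace SetTarget in TargetedCamera:

        public void SetTarget(Vector3 target)
        {
            this.target = target;
            direction = Vector3.Normalize(target - position);

            // Pick a reference axis that is not (almost) parallel to the new direction.
            Vector3 worldUp = Vector3.Up;
            if (Math.Abs(Vector3.Dot(direction, worldUp)) > 0.99f)
                worldUp = Vector3.Forward;

            // Rebuild an orthonormal basis: side first, then a true up for this direction.
            Vector3 newSide = Vector3.Normalize(Vector3.Cross(worldUp, direction));
            up = Vector3.Cross(direction, newSide);

            view = Matrix.CreateLookAt(position, target, up);
        }

    The idea is simply that up is recomputed per direction rather than held fixed, so the look-at basis never degenerates as the target passes underneath the camera.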

    Read the article

  • Virtualhost Wildcard Subdomains

    - by Khuram
    We have one static IP on which we have routed our company website. We have set up a local machine on Windows with WAMP to run our testing server, and we want virtual hosts to test our different apps. However, we have a new project which uses wildcard subdomains, and we're having trouble creating them. How can we create wildcard subdomains in our VirtualHosts? We use:

        NameVirtualHost *

        <VirtualHost *>
            ServerAdmin admin@test
            DocumentRoot "E:/Wamp/www/corporate"
            ServerName companysite.com
        </VirtualHost>

        <VirtualHost *>
            ServerAdmin admin@test
            DocumentRoot "E:/Wamp/www/project"
            ServerName project.companysite.com
        </VirtualHost>

        <VirtualHost *>
            ServerAdmin admin@test
            DocumentRoot "E:/Wamp/www/project"
            ServerName *.project.companysite.com
        </VirtualHost>

    However, the last * wildcard does not work. Any help?
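    For what it's worth, ServerName does not accept wildcards; pattern matching for name-based hosts is what ServerAlias is for. A sketch of the project block rewritten that way, using the paths and names from the question (and since the DocumentRoot is the same, it can stand in for both the second and third blocks):

        <VirtualHost *>
            ServerAdmin admin@test
            DocumentRoot "E:/Wamp/www/project"
            # ServerName must be a literal host; the wildcard goes on ServerAlias
            ServerName project.companysite.com
            ServerAlias *.project.companysite.com
        </VirtualHost>

    The subdomains still need to resolve to the machine (hosts-file entries or wildcard DNS) for the match to ever be attempted.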

    Read the article

  • openssl creates invalid signature if run by a different user

    - by divB
    Very strange problem here: openssl successfully creates signatures, but only those created as root are valid, whereas those created by another user (www-data) are invalid! All files are readable and there are no error messages:

        # echo -ne Test | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/asRoot.der
        # su www-data
        $ echo -ne Test | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/asWww-data.der
        $ uname -a
        Linux linux 2.6.32-5-openvz-amd64 #1 SMP Mon Feb 25 01:16:25 UTC 2013 i686 GNU/Linux
        $ cat /etc/debian_version
        6.0.7

    Both files (asRoot.der and asWww-data.der) are transferred to a different computer for verification with the public key:

        $ echo -ne Test | openssl dgst -verify activation.pub -keyform DER -signature asRoot.der
        Verified OK
        $ echo -ne Test | openssl dgst -verify activation.pub -keyform DER -signature asWww-data.der
        Verification Failure

    That can't be true! What's wrong here?
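    One detail worth ruling out, offered as a guess: the two commands run under different shells (root's and www-data's), and echo -ne is not portable. Under dash, which is /bin/sh on Debian and a likely login shell for www-data, echo prints the literal string "-ne Test", so the two users may simply be signing different bytes even though the command lines look identical. A quick check, plus a portable alternative for producing the data to sign:

        # Compare exactly what each shell feeds into openssl (run as both users)
        echo -ne Test | od -c

        # printf behaves the same in every shell, so prefer it for the signed data
        printf 'Test' | openssl dgst -ecdsa-with-SHA1 -sign activation.key > /tmp/test.der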

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I am developing high-volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, and so on. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process.

    What is the approach you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), then automatically run them overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The real problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have an approach derived over the years, but maybe I am just not reading enough articles.

    So, banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally I never got there)? How do you test your map-reduce jobs, for example (I do not use Hadoop, but it is quite similar)?

    I hope this is not too open-ended a question.

    luke

    Read the article

  • Contract / Project / Line-Item hierarchy design considerations

    - by Ryan
    We currently have an application that allows users to create a Contract. A contract can have 1 or more Projects. A project can have 0 or more sub-projects (which can have their own sub-projects, and so on) as well as 1 or more Lines. Lines can have any number of sub-lines (which can have their own sub-lines, and so on). Currently, our design contains circular references, and I'd like to get away from that. It looks a bit like this:

        public class Contract
        {
            public List<Project> Projects { get; set; }
        }

        public class Project
        {
            public Contract OwningContract { get; set; }
            public Project ParentProject { get; set; }
            public List<Project> SubProjects { get; set; }
            public List<Line> Lines { get; set; }
        }

        public class Line
        {
            public Project OwningProject { get; set; }
            public Line ParentLine { get; set; }
            public List<Line> SubLines { get; set; }
        }

    We're using the M-V-VM "pattern" and use these models (and their associated view models) to populate a large "edit" screen where users can modify their contracts and the properties on all of the objects. Where things start to get confusing for me is when we add, for example, a Cost property to the Line: the issue is reflecting at the highest level (the contract) changes made at the lowest level.

    I'm looking for some thoughts on how to change this design to remove the circular references. One thought I had was that the contract would have a Dictionary<Guid, Project> which would contain ALL projects (regardless of their level in the hierarchy). The Project would then have a Guid property called "Parent" which could be used to search the contract's dictionary for the parent object. The same logic could be applied at the Line level. Thanks! Any help is appreciated.
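    A minimal sketch of the flat-dictionary idea described above, just to make the shape concrete; the nullable ParentId, the Lines list and the rollup method are illustrative choices, not an established pattern from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Line
        {
            public Guid Id { get; set; }
            public Guid? ParentId { get; set; }   // null = top-level line of its project
            public decimal Cost { get; set; }
        }

        public class Project
        {
            public Guid Id { get; set; }
            public Guid? ParentId { get; set; }   // null = top-level project of the contract
            public List<Line> Lines { get; set; }
        }

        public class Contract
        {
            // Single flat owner of every project, regardless of depth; no back-references needed.
            public Dictionary<Guid, Project> Projects { get; set; }

            public Project GetParent(Project p)
            {
                return p.ParentId.HasValue ? Projects[p.ParentId.Value] : null;
            }

            public IEnumerable<Project> GetChildren(Project p)
            {
                return Projects.Values.Where(x => x.ParentId == p.Id);
            }

            // A contract-level rollup that no longer relies on circular object references.
            public decimal TotalCost()
            {
                return Projects.Values.SelectMany(p => p.Lines).Sum(l => l.Cost);
            }
        }

    Reflecting a Cost change at the contract level then becomes either a walk up the ParentId links or a query over Projects.Values, rather than a traversal through mutually referencing objects.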

    Read the article

  • Mocking concrete class - Not recommended

    - by Mik378
    I've just read an excerpt of the "Growing Object-Oriented Software" book which explains some reasons why mocking a concrete class is not recommended. Here is some sample code of a unit test for the MusicCentre class:

        public class MusicCentreTest {
            @Test
            public void startsCdPlayerAtTimeRequested() {
                final MutableTime scheduledTime = new MutableTime();

                CdPlayer player = new CdPlayer() {
                    @Override
                    public void scheduleToStartAt(Time startTime) {
                        scheduledTime.set(startTime);
                    }
                };

                MusicCentre centre = new MusicCentre(player);
                centre.startMediaAt(LATER);
                assertEquals(LATER, scheduledTime.get());
            }
        }

    And his first explanation:

        The problem with this approach is that it leaves the relationship between the objects implicit. I hope we've made clear by now that the intention of Test-Driven Development with Mock Objects is to discover relationships between objects. If I subclass, there's nothing in the domain code to make such a relationship visible, just methods on an object. This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class.

    I can't figure out exactly what he means when he says:

        This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class.

    I understand that the service corresponds to MusicCentre's method called startMediaAt. What does he mean by "elsewhere"?

    The complete excerpt is here: http://www.mockobjects.com/2007/04/test-smell-mocking-concrete-classes.html

    Read the article

  • iis7 .net webservice 404 error

    - by agilenoob
    I have a web service at /test/Service1.asmx, in the same folder as a page /test/test.aspx. The page works fine, but I get the message below for the service in the same location. I know the file is there and the URL is correct, and I have added the script module and managed handler as well. If anyone knows what I'm missing here, I'd appreciate it.

        Server Error in '/' Application.
        The resource cannot be found.
        Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
        Requested URL: /test/Service1.asmx
        Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016

    Failed request log:

        ModuleName            ManagedPipelineHandler
        Notification          128
        HttpStatus            404
        HttpReason            Not Found
        HttpSubStatus         0
        ErrorCode             0
        ConfigExceptionInfo
        Notification          EXECUTE_REQUEST_HANDLER
        ErrorCode             The operation completed successfully. (0x0)

    Read the article

  • How to add delay in autologin

    - by raj
    I enabled autologin on my system (CentOS 6.2) by editing /etc/gdm/custom.conf and adding:

        [daemon]
        AutomaticLoginEnable=true
        AutomaticLogin=test

    Here test is an account name, and autologin works for that account. The problem is that it is not possible to log out: every time I log out, it goes back to GDM (the graphical display manager), which immediately logs straight back into the test account, since that account is set to log in automatically. I want to add a delay, so that GDM waits for some time and only logs in to the test account if no one logs in to another account first. How do I add that delay?
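    GDM has a separate timed-login mode that sounds like what is being described here: instead of logging in immediately, the greeter waits a configurable number of seconds and only then logs in to the named account. A sketch of /etc/gdm/custom.conf using that instead of AutomaticLogin (the 30-second value is just an example):

        [daemon]
        TimedLoginEnable=true
        TimedLogin=test
        TimedLoginDelay=30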

    Read the article

  • How can I check if PHP was compiled with the UNICODE version of the Win32 API?

    - by Wesley Murch
    This is related to this Stack Overflow post: glob() can't find file names with multibyte characters on Windows? I'm having issues with PHP and files that have multibyte characters on Windows. Here's my test case:

        print_r(scandir('./uploads/'));
        print_r(glob('./uploads/*'));

    Correct output on the remote UNIX server:

        Array
        (
            [0] => .
            [1] => ..
            [2] => filename-äöü.jpg
            [3] => filename.jpg
            [4] => test?test.jpg
            [5] => ??? ?????.jpg
            [6] => ?????????.jpg
            [7] => ???.jpg
        )
        Array
        (
            [0] => ./uploads/filename-äöü.jpg
            [1] => ./uploads/filename.jpg
            [2] => ./uploads/test?test.jpg
            [3] => ./uploads/??? ?????.jpg
            [4] => ./uploads/?????????.jpg
            [5] => ./uploads/???.jpg
        )

    Incorrect output locally on Windows:

        Array
        (
            [0] => .
            [1] => ..
            [2] => ??? ?????.jpg
            [3] => ???.jpg
            [4] => ?????????.jpg
            [5] => filename-äöü.jpg
            [6] => filename.jpg
            [7] => test?test.jpg
        )
        Array
        (
            [0] => ./uploads/filename-äöü.jpg
            [1] => ./uploads/filename.jpg
        )

    Here's a relevant excerpt from the answer I chose to accept (which is actually a quote from the comments on this article, posted online over two years ago: http://www.rooftopsolutions.nl/blog/filesystem-encoding-and-php):

        The output from your PHP installation on Windows is easy to explain: you installed the wrong version of PHP, and used a version not compiled to use the Unicode version of the Win32 API. For this reason, the filesystem calls used by PHP will use the legacy "ANSI" API, and so the C/C++ libraries linked with this version of PHP will first try to convert your UTF-8-encoded PHP string into the local "ANSI" codepage selected in the running environment (see the CHCP command before starting PHP from a command line window).

        Your version of Windows is MOST PROBABLY NOT responsible for this weird thing. Actually, this is YOUR version of PHP which is not compiled correctly, and that uses the legacy ANSI version of the Win32 API (for compatibility with the legacy 16-bit versions of Windows 95/98, whose filesystem support in the kernel actually had no direct support for Unicode, but used an internal conversion layer to convert Unicode to the local ANSI codepage before using the actual ANSI version of the API).

        Recompile PHP using the compiler option to use the UNICODE version of the Win32 API (which should be the default today, and anyway always the default for PHP installed on a server that will NEVER be Windows 95 or Windows 98...)

    I can't confirm whether this is my problem or not. I used phpinfo() and did not find anything interesting, but I wasn't sure what to look for. I've been using XAMPP for easy installations, so I'm really not sure exactly how it was installed. I'm using Windows 7, 64-bit, so forgive my ignorance, but I'm not even sure if "Win32" is relevant here. How can I check if my current version of PHP was compiled with the configuration mentioned above?

        PHP Version: 5.3.8
        System: Windows NT WES-PC 6.1 build 7601 (Windows 7 Home Premium Edition Service Pack 1) i586
        Build Date: Aug 23 2011 11:47:20
        Compiler: MSVC9 (Visual C++ 2008)
        Architecture: x86
        Configure Command: cscript /nologo configure.js "--enable-snapshot-build" "--disable-isapi" "--enable-debug-pack" "--disable-isapi" "--without-mssql" "--without-pdo-mssql" "--without-pi3web" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8-11g=D:\php-sdk\oracle\instantclient11\sdk,shared" "--enable-object-out-dir=../obj/" "--enable-com-dotnet" "--with-mcrypt=static" "--disable-static-analyze"

    Read the article

  • What is the value to checking in broken unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in broken unit tests?

    I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task.

    The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not exercise the bug based on how it is currently called. But unit tests can be created that execute specific scenarios, using valid inputs, which will cause the bug to be seen.

    What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it.

    On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that should fail vs. failures due to a code check-in would be difficult.
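    To make the "flagged with ignore, priority, category" idea concrete, here is a small hedged sketch in NUnit-style C#; the enum and parser are invented for the example. The attributes let the failing test be checked in and tracked as a known bug without breaking the build, and deleting the [Ignore] attribute reactivates it once the code is fixed:

        using NUnit.Framework;

        public enum Animal { Null, Cat }

        public static class AnimalParser
        {
            // Current (buggy) behaviour: case sensitive.
            public static Animal Parse(string name)
            {
                return name == "Cat" ? Animal.Cat : Animal.Null;
            }
        }

        [TestFixture]
        public class AnimalParserTests
        {
            [Test]
            public void Parse_ExactCase_ReturnsCat()   // passes today
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("Cat"));
            }

            [Test]
            [Ignore("Known bug: parsing should be case-insensitive")]
            [Category("KnownBugs")]
            public void Parse_LowerCase_ReturnsCat()   // documents the desired behaviour
            {
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }

    Running only the "KnownBugs" category then gives a live list of documented-but-unfixed defects, separate from regressions introduced by new check-ins.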

    Read the article

  • How to refactor a myriad of similar classes

    - by TobiMcNamobi
    I'm faced with similar classes A1, A2, ..., A100. Believe it or not, there are roughly a hundred classes that almost look the same. None of these classes are unit tested (of course ;-) ). Each of these classes is about 50 lines of code, which is not too much by itself, but it is still way too much duplicated code. I'm considering the following options:

    1. Write tests for A1, ..., A100. Then refactor by creating an abstract base class AA.
       Pro: I'm (near to totally) safe, because the tests ensure that nothing goes wrong.
       Con: A lot of effort, and duplication of test code.

    2. Write tests for A1 and A2. Abstract the duplicated test code and use the abstraction to create the rest of the tests. Then create AA as in 1.
       Pro: Less effort than in 1 while maintaining a similar degree of safety.
       Con: I find generalized test code weird; it often seems ... incoherent (is this the right word?). Normally I prefer specialized test code for specialized classes, but that requires a good design, which is the goal of this whole refactoring.

    3. Write AA first, testing it with mock classes. Then inherit A1, ..., A100 from it successively.
       Pro: Fastest way to eliminate the duplicates.
       Con: Most Ax classes look very much the same, but where they don't, there is the danger of changing behaviour by inheriting from AA.

    4. Other options ...

    At first I went for 3, because the Ax classes really are very similar to each other. But now I'm a bit unsure whether this is the right way (from a unit testing enthusiast's perspective).
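    Option 2 is sometimes implemented as an abstract test fixture: the shared assertions live once in a generic base test class, and each Ax gets a tiny concrete subclass, so every class stays visibly covered without copy-pasted test code. A hedged C#/NUnit sketch of the shape (IThing and the sample behaviour are stand-ins, since the real contract of A1...A100 isn't given):

        using NUnit.Framework;

        // The contract the hundred classes are assumed to share (illustrative).
        public interface IThing { int Compute(int input); }

        public class A1 : IThing { public int Compute(int input) { return input + 1; } }
        public class A2 : IThing { public int Compute(int input) { return input + 1; } }

        // Duplicated test logic lives once, in an abstract fixture.
        public abstract class ThingContractTests<T> where T : IThing, new()
        {
            [Test]
            public void Compute_IsDeterministic()
            {
                var thing = new T();
                Assert.AreEqual(thing.Compute(21), thing.Compute(21));
            }
        }

        // One trivial subclass per Ax class.
        [TestFixture] public class A1Tests : ThingContractTests<A1> { }
        [TestFixture] public class A2Tests : ThingContractTests<A2> { }

    Class-specific quirks can still get their own specialized tests inside the concrete fixtures, which goes some way toward the "generalized test code feels incoherent" concern.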

    Read the article

  • Remote script execution on Windows 2003 server - alternatives to PSEXEC

    - by chickeninabiscuit
    We want to deploy our application to our test server from our Hudson server. I'd like Hudson to copy the application files and start a script that runs locally on our test server. We can't use PsExec because of a cross-domain policy. Currently we are doing this manually, by RDPing to the test server and checking out the code from Subversion by hand. Are there alternatives to PsExec that can bypass the cross-domain policy problem?

    Read the article

  • Can only access asp.net app on localhost

    - by Kevin Donn
    I'm trying to get an ASP.NET application up on IIS on a Windows Server 2008 machine. I can hit the app from localhost, no problem. But I can't access the app using the server's domain name, either locally or from another machine on the network. But here's the odd part: I can access a normal file on IIS using the domain name, both from a browser running on the server and from a browser running on another machine on the network. Here's a synopsis ("http" converted to "htp" below because I don't have enough points to have all these links in my message):

    From IE on the server itself:

        works  htp://localhost/foo.htm
        works  htp://localhost/App
        works  htp://test.foo.com/foo.htm
        dead   htp://test.foo.com/App

    From IE on another machine (inside or outside my subnet):

        works  htp://test.foo.com/foo.htm
        dead   htp://test.foo.com/App

    And when I say "dead" I mean the request times out. Any ideas?

    Read the article

  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this:

        A --> B
        |     |
        v     v
        C     D

    This means three DRBD resources running across these four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C.

    - From server A to server C, in the first datacentre, using protocol B
    - From server B to server D, in the second datacentre, using protocol B
    - From server A to server B, different datacentres, stacked resource using protocol A

    First question: booting a stacked resource

    We haven't got any vital data running on this setup yet; we're still making sure it works first. This means simulating power cuts, network outages etc. and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong we need to manually log in and bring that resource up, then start the virtual machine on top of it.

    Second question: setting the primary of a stacked resource

    Our lower-level resources are configured so that the right one is considered primary:

        resource test-AC {
            on A { ... }
            on C { ... }
            startup {
                become-primary-on A;
            }
        }

    But I don't see any way to do the same with a stacked resource, as the following isn't a valid config:

        resource test-AB {
            stacked-on-top-of test-AC { ... }
            stacked-on-top-of test-BD { ... }
            startup {
                become-primary-on test-AC;
            }
        }

    This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?

    Read the article

  • mount issue in ubuntu 12.10

    - by Vipin Ms
    I'm having an issue with the latest Ubuntu, 12.10. Let me make it more clear. I have the following partitions on my laptop:

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1            2048    39997439    19997696   83  Linux
        /dev/sda2   *    40001850    81947564    20972857+  83  Linux
        /dev/sda3        81947565   123877214    20964825   83  Linux
        /dev/sda4       123887614   976773119   426442753    5  Extended
        /dev/sda5       123887616   333602815   104857600   83  Linux
        /dev/sda6       333604864   543320063   104857600   83  Linux
        /dev/sda7       543322112   753037311   104857600   83  Linux
        /dev/sda8       753039360   976773119   111866880   83  Linux

    I also have two users, named "ms" and "abc": ms is for administrative tasks and abc is for my friends. When I mount any drive under the "abc" user, I cannot access it under my other user, "ms", and the same happens the other way around. I found the probable reason behind the issue: when I mount any drive under the "abc" user, Ubuntu mounts it under /media/abc/volume_name instead of /media/volume_name, and the same goes for the "ms" user.

        # df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1        19G   11G  7.5G  59% /
        udev            1.5G  4.0K  1.5G   1% /dev
        tmpfs           599M  896K  598M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.5G  620K  1.5G   1% /run/shm
        none            100M   92K  100M   1% /run/user
        /dev/sda2        20G  172M   19G   1% /media/abc/TEST
        /dev/sdb1       466G  353G  114G  76% /media/abc/F088F74288F7063E
        /dev/sdb2       466G  318G  148G  69% /media/abc/New Volume
        /dev/sda5        99G   94G  323M 100% /media/abc/Songs
        /dev/sda6        99G   31G   63G  34% /media/ms/Films

    Here you can see that "TEST" was mounted under /media/abc/TEST. When I try to access the already mounted partition /media/abc/TEST in my "ms" session, I get the following error. How do I fix this? Is it a bug? Is there any way to fix this without modifying the underlying file-system structure?
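    Background that may help frame the question: the per-user /media/<user>/ mount points come from udisks2 in 12.10, and mounts made that way are owned by the user who triggered them. One way around it, offered as a sketch rather than the only fix, is to give partitions that both accounts need a fixed mount point via /etc/fstab; the UUID below is a placeholder (the real one comes from blkid) and the filesystem type is assumed to be ext4:

        # find the UUID of the partition (run as root)
        blkid /dev/sda2

        # /etc/fstab entry: a fixed, shared mount point instead of /media/abc/TEST
        # UUID=<from blkid>  /media/TEST  ext4  defaults  0  2

    With an fstab entry in place the partition is mounted at boot under /media/TEST and is visible to both ms and abc sessions.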

    Read the article

  • GMail detecting mail as spam

    - by Petru Toader
    I've been trying for a long time to get our company's mail server to send mail that will be accepted by the GMail spam filter. I have managed to make it work for Yahoo Mail and Hotmail; sadly, GMail is still marking our mails as spam. I have configured DKIM, SPF and DMARC, and verified our mail server IP address against blacklists. I have also pasted here the headers GMail gets when we send a mail.

        Delivered-To: [email protected]
        Received: by 10.42.215.6 with SMTP id hc6csp107427icb; Wed, 20 Aug 2014 07:34:26 -0700 (PDT)
        X-Received: by 10.194.100.34 with SMTP id ev2mr59101019wjb.76.1408545265402; Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from mail.phyramid.com (mail.phyramid.com. [178.157.82.23]) by mx.google.com with ESMTPS id dj10si4827754wib.79.2014.08.20.07.34.24 for <[email protected]> (version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128); Wed, 20 Aug 2014 07:34:25 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) client-ip=178.157.82.23;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 178.157.82.23 as permitted sender) [email protected]; dkim=pass [email protected]
        Received: from localhost (localhost [127.0.0.1]) by mail.phyramid.com (Postfix) with ESMTP id ED2BB2017AC for <[email protected]>; Wed, 20 Aug 2014 17:33:23 +0300 (EEST)
        DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=phyramid.com; h= content-type:content-type:mime-version:x-mailer:subject:subject :message-id:to:from:from:date:date; s=dkim; t=1408545197; x= 1409409197; bh=e04RtoyF7G39lfCvA9LLhTz4nF64siZtN5IYmC18Xsc=; b=o +6mO8Uz4Uf1G4U2q6tKUiEy2N2n/5R2VtPPwIvBE5xzK/hEd2sDGMxVzQVgIDCsK Q0Xh+auPaQpxldQ+AEcL2XSZMrk/g0mJONjkpI19I5AwGIJCR1SVvxdecohTn9iR bCHzrGi2wAicfDBzOH6lUBNfh2thri79aubdCYc97U=
        X-Amavis-Modified: Mail body modified (using disclaimer) - mail.phyramid.com
        X-Virus-Scanned: Debian amavisd-new at mail.phyramid.com
        Received: from mail.phyramid.com ([127.0.0.1]) by localhost (mail.phyramid.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 3JcgXZAXeFtX for <[email protected]>; Wed, 20 Aug 2014 17:33:17 +0300 (EEST)
        Received: from whiterock.local (unknown [109.98.21.30]) by mail.phyramid.com (Postfix) with ESMTPSA id 05CAE200280 for <[email protected]>; Wed, 20 Aug 2014 17:33:15 +0300 (EEST)
        Date: Wed, 20 Aug 2014 17:34:15 +0300
        From: Company Mail <[email protected]>
        To: [email protected]
        Message-ID: <[email protected]>
        Subject: hey there!
        X-Mailer: Airmail (247)
        MIME-Version: 1.0
        Content-Type: text/plain; charset="utf-8"
        Content-Transfer-Encoding: 7bit
        Content-Disposition: inline

        How was your summer?
        ----
        Thanks a lot!

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be a rewrite of a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.

    Facts:

    - there will be 2-3 developers
    - at least one of the developers uses Windows, the rest use Linux
    - there is one remote Linux-based machine, which should handle the test and production instances
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on)
    - client: internal, frequent business logic changes, scrum, daily deployments

    What I want to achieve is a good workflow at as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:

    - developers code locally
    - there is a Vagrant instance on every development machine, managed by Puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine
    - while coding, a developer deploys to the Vagrant machine
    - after a local merge to the test branch, Jenkins on the Vagrant box handles tests
    - when everything is fine, the developer pushes commits and merges
    - Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on
    - if everything looks green, Jenkins deploys to the test Tomcat instance

    Deployment to production is manual (although it can be done using helper scripts), once the business logic has been tested by other divisions and everything looks fine to the client.

    Now, the real question: does the above make any sense? Things that I'm not sure about:

    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on Vagrant?

    Read the article
