Search Results

Search found 35507 results on 1421 pages for 'performance test'.


  • How can I get PowerShell Added-Types to use Added References

    - by Scott Weinstein
    I'm working on a PoSh project that generates CSharp code, and then Add-Types it into memory. The new types use existing types in an on-disk DLL, which is loaded via Add-Type. All is well and good until I actually try to invoke methods on the new types. Here's an example of what I'm doing: $PWD = "." rm -Force $PWD\TestClassOne* $code = " namespace TEST{ public class TestClassOne { public int DoNothing() { return 1; } } }" $code | Out-File tcone.cs Add-Type -OutputAssembly $PWD\TestClassOne.dll -OutputType Library -Path $PWD\tcone.cs Add-Type -Path $PWD\TestClassOne.dll $a = New-Object TEST.TestClassOne "Using TestClassOne" $a.DoNothing() "Compiling TestClassTwo" Add-Type -Language CSharpVersion3 -TypeDefinition " namespace TEST{ public class TestClassTwo { public int CallTestClassOne() { var a = new TEST.TestClassOne(); return a.DoNothing(); } } }" -ReferencedAssemblies $PWD\TestClassOne.dll "OK" $b = New-Object TEST.TestClassTwo "Using TestClassTwo" $b.CallTestClassOne() Running the above script gives the following error on the last line: Exception calling "CallTestClassOne" with "0" argument(s): "Could not load file or assembly 'TestClassOne,...' or one of its dependencies. The system cannot find the file specified." At AddTypeTest.ps1:39 char:20 + $b.CallTestClassOne <<<< () + CategoryInfo : NotSpecified: (:) [], MethodInvocationException + FullyQualifiedErrorId : DotNetMethodException What am I doing wrong?
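
    A workaround sometimes suggested for this kind of failure is to hook the AppDomain's AssemblyResolve event so the CLR can find the on-disk DLL when the in-memory assembly asks for it. The sketch below is untested and assumes TestClassOne.dll sits in the current directory:

        $dllPath = Join-Path (Get-Location) 'TestClassOne.dll'
        [System.AppDomain]::CurrentDomain.add_AssemblyResolve({
            param($sender, $e)
            # Only answer for the assembly we compiled to disk; let everything else probe normally.
            if ($e.Name -like 'TestClassOne*') {
                return [System.Reflection.Assembly]::LoadFrom($dllPath)
            }
            return $null
        })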

    Read the article

  • remove element in javascript

    - by Hulk
    In the code below, how do I remove the hyperlink after getting its innerHTML? function test(obj) { var a=obj.innerHTML // remove obj element here } $p = $('<a id="name" onclick="javascript:var ele=test(this);">').html( "test" ); $('#questions').append( $p ); Thanks.
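
    One possible version of test(), sketched on the assumption that the goal is to read the link text and then drop the <a> element itself (jQuery is already in use on this page):

        function test(obj) {
            var a = obj.innerHTML;       // grab the contents first
            $(obj).remove();             // detach the <a> element
            // without jQuery: obj.parentNode.removeChild(obj);
            return a;
        }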

    Read the article

  • System.EntryPointNotFoundException while importing libsrp in C# code under Ubuntu

    - by Hema Joshi
    Hi, I am importing libsrp.so in C# code under Ubuntu. My code is: using System; using System.IO; using System.Text; using System.Runtime.InteropServices; namespace Main { public static class Test { [DllImport("libsrp.so" ,EntryPoint = "SRP_initialize_library", CallingConvention=CallingConvention.Cdecl)] public static extern int SRP_initialize_library(); [DllImport("libsrp.so" ,EntryPoint = "SRP_finalize_library", CallingConvention=CallingConvention.Cdecl)] public static extern int SRP_finalize_library(); } public class Test1 { public static void Main( string[] args ) { Console.Write("output is:", Test.SRP_initialize_library()); Test.SRP_finalize_library(); Console.Write("\n"); } } } But when running the code using Mono I get the error: Unhandled Exception: System.EntryPointNotFoundException: SRP_initialize_library at (wrapper managed-to-native) Main.Test:SRP_initialize_library () at Main.Test1.Main (System.String[] args) [0x00000] I am unable to find what the problem is. Please tell me where the problem is.
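
    EntryPointNotFoundException generally means the shared library was loaded but the named symbol was not found in it, so a first step that is often suggested is to list the exported symbols and compare them with the EntryPoint names used in the DllImport attributes (the library path below is only an assumed example):

        # dump the dynamic symbol table of the native library
        nm -D /usr/lib/libsrp.so | grep -i srp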

    Read the article

  • ocaml pattern match question

    - by REALFREE
    I'm trying to write a simple recursive function that looks over a list and returns a pair of integers. This is easy to write in C/C++/Java, but I'm new to OCaml, so it is somehow hard to find the solution due to a type conflict. It goes like this: let rec test l = match l with [] -> 0 | x::xs -> if x > 0 then (1+test, 0) else (0, 1+test);; I know this is not correct and kinda awkward, but any help will be appreciated.
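
    Assuming the intent is to count how many elements pass the x > 0 test and how many do not, one way to make the types line up is to return a pair in the base case as well and destructure the pair produced by the recursive call:

        (* a sketch: returns (positive count, non-positive count) *)
        let rec test l =
          match l with
          | [] -> (0, 0)
          | x :: xs ->
              let (pos, neg) = test xs in
              if x > 0 then (pos + 1, neg) else (pos, neg + 1)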

    Read the article

  • What are the differences among sqlite3 from python2.5, pysqlite and apsw

    - by leo
    Hi, I would like to know the differences among sqlite3 from Python 2.5, pysqlite and apsw. I had a bumpy run when trying to install pysqlite on Windows Vista with Python 2.5; see the following: download sqlite from http://sqlite.org/download.html, unzip it into the windows/system32 folder and put sqlite3.dll into the c:/python25/Lib folder; download the pysqlite Windows installer; then, when trying to run the following in the Python shell: >>> from pysqlite2 import test Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pysqlite2\test\__init__.py", line 35, in <module> from pysqlite2.test import dbapi, types, userfunctions, factory, transactions,\ File "pysqlite2\test\dbapi.py", line 27, in <module> import pysqlite2.dbapi2 as sqlite File "pysqlite2\dbapi2.py", line 27, in <module> from pysqlite2._sqlite import * ImportError: No module named _sqlite I am wondering whether anybody with experience of the above three types of sqlite bindings to Python can comment on their pros and cons, such as performance. Is it worthwhile to try pysqlite or apsw? Thanks.

    Read the article

  • HTTP Error: 400 when sending msmq message over http

    - by dontera
    I am developing a solution which will utilize msmq to transmit data between two machines. Due to the separation of said machines, we need to use HTTP transport for the messages. In my test environment I am using a Windows 7 x64 development machine, which is attempting to send messages using a homebrew app to any of several test machines I have control over. All machines are either Windows Server 2003 or Server 2008 with MSMQ and MSMQ HTTP support installed. For any test destination, I can use the following queue path name with success: FORMATNAME:DIRECT=TCP:[machine_name_or_ip]\private$\test_queue But for any test destination, the following always fails: FORMATNAME:DIRECT=HTTP://[machine_name_or_ip]/msmq/private$/test_queue I have used all permutations of machine names/ips available. I have created mappings using the method described at this blog post. All result in the same HTTP Error: 400. The following is the code used to send messages: MessageQueue mq = new MessageQueue(queuepath); System.Messaging.Message msg = new System.Messaging.Message { Priority = MessagePriority.Normal, Formatter = new XmlMessageFormatter(), Label = "test" }; msg.Body = txtMessageBody.Text; msg.UseDeadLetterQueue = true; msg.UseJournalQueue = true; msg.AcknowledgeType = AcknowledgeTypes.FullReachQueue | AcknowledgeTypes.FullReceive; msg.AdministrationQueue = new MessageQueue(@".\private$\Ack"); if (SendTransactional) mq.Send(msg, MessageQueueTransactionType.Single); else mq.Send(msg); Additional Information: in the IIS logs on the destination machines I can see each message I send being recorded as a POST with a status code of 200. I am open to any suggestions.

    Read the article

  • Multiple Instances of Static Singleton

    - by Nexus
    I've recently been working with code that looks like this: using namespace std; class Singleton { public: static Singleton& getInstance(); int val; }; Singleton &Singleton::getInstance() { static Singleton s; return s; } class Test { public: Test(Singleton &singleton1); }; Test::Test(Singleton &singleton1) { Singleton singleton2 = Singleton::getInstance(); singleton2.val = 1; if(singleton1.val == singleton2.val) { cout << "Match\n"; } else { cout << "No Match " << singleton1.val << " - " << singleton2.val << "\n"; } } int main() { Singleton singleton = Singleton::getInstance(); singleton.val = 2; Test t(singleton); } Every time I run it I get "No Match". From what I can tell when stepping through with GDB, there are two instances of the Singleton. Why is this?
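
    The culprit is that Singleton singleton2 = Singleton::getInstance(); (and the similar line in main) copies the singleton by value, so each name refers to a distinct object. Binding references instead keeps every name pointing at the single static instance; a minimal sketch:

        #include <iostream>

        class Singleton {
        public:
            static Singleton& getInstance() {
                static Singleton s;
                return s;
            }
            int val;
        };

        int main() {
            // References, not copies: a and b name the same object.
            Singleton &a = Singleton::getInstance();
            Singleton &b = Singleton::getInstance();
            a.val = 2;
            b.val = 1;
            std::cout << (&a == &b ? "Match\n" : "No Match\n");  // prints "Match"
            return 0;
        }

    Making the copy constructor private (or deleting it in C++11) is a common way to have the compiler catch accidental copies like this.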

    Read the article

  • Why can't `#import("dart:unittest")` run?

    - by Freewind
    I wrote some Dart test code: #import("dart:unittest"); main() { test('this is a test', () { int x = 2+3; expect(x).equals(5); }); } It doesn't display any error in the Dart Editor, but when I press the "run" button, it reports: Do not know how to load 'dart:unittest''file:///home/freewind/dev/dart/editor /samples/shuzu.org/test/model_test.dart': Error: line 1 pos 1: library handler failed #import("dart:unittest"); ^ I see there is a "dart:unittest" library in my dart-sdk. Why can't it be run?

    Read the article

  • Multiple Mod_ReWrites on one site - Possible? (Wordpress blog in root directory, CodeIgniter project in a subdirectory)

    - by Sootah
    Currently I am creating a project with CodeIgniter that is contained within a subdirectory of my domain. For this example we'll call it domain.com/test. I also have Wordpress installed on this domain, and all of its files are in the root. For instance, if you navigate to my domain.com then it pulls up the Wordpress blog. I currently have the Wordpress mod_rewrite activated so that it uses friendly URLs. For those of you that aren't familiar with CodeIgniter, all requests are routed through index.php in the project's root folder. So, in this case, it'd be domain.com/test/index.php. A request to the application would be sent like domain.com/test/index.php/somecontroller/method. What I'd like to do is, for any incoming request that is directed towards the /test/ folder or some subdirectory therein, to have it appropriately rewrite the URL so the index.php isn't included. (As per the example above, it'd end up being domain.com/test/somecontroller/method) For any OTHER request, meaning anything that's not within the /test/ directory, I would like it to route the request to Wordpress. I would imagine it's a simple RewriteCond to make it check to see if the request is for the /test/ directory or a subdirectory therein, but I've no idea how to do it. Perhaps you can't have more than one set of Rewrite Rules per site. I will include the recommended mod_rewrite rules for each application. Wordpress: (Currently used) <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> CodeIgniter: (Pulled from their Wiki) <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / #Removes access to the system folder by users. #Additionally this will allow you to create a System.php controller, #previously this would not have been possible. #'system' can be replaced if you have renamed your system folder. RewriteCond %{REQUEST_URI} ^system.* RewriteRule ^(.*)$ /index.php?/$1 [L] #When your application folder isn't in the system folder #This snippet prevents user access to the application folder #Submitted by: Fabdrol #Rename 'application' to your applications folder name. RewriteCond %{REQUEST_URI} ^application.* RewriteRule ^(.*)$ /index.php?/$1 [L] #Checks to see if the user is attempting to access a valid file, #such as an image or css document, if this isn't true it sends the #request to index.php RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] </IfModule> <IfModule !mod_rewrite.c> # If we don't have mod_rewrite installed, all 404's # can be sent to index.php, and everything works as normal. # Submitted by: ElliotHaughin ErrorDocument 404 /index.php </IfModule> Any and all help is much appreciated!! Thanks, -Sootah
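
    One approach that is often suggested for this layout is to leave the WordPress rules in the root .htaccess and drop a second .htaccess inside the /test/ folder; per-directory rewrite rules are not inherited by default, so the CodeIgniter rules below (an untested sketch) would take over for anything under /test/ while WordPress keeps handling the rest of the site:

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /test/
            # send anything that isn't a real file or directory to CodeIgniter's front controller
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php?/$1 [L]
        </IfModule>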

    Read the article

  • Testing a Doctrine2 Entity with assertEquals results in fatal out-of-memory error

    - by Matt
    I have a PHPUnit test that's using a Doctrine2 custom repository and Doctrine Fixtures. I wanted to test that a query gave me back an expected entity from my fixture. But when I try $this->assertEquals($expectedEntity, $result);, I get Fatal error: out of memory. I'm guessing it is recursing into all the relations and the entity manager and whatnot. Is there a good way to test this equality? Should I just assertEquals on the IDs of the entities?
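
    Comparing just the identity (or a handful of meaningful fields) avoids dragging the proxies, collections and EntityManager into the comparison; a hedged sketch, where getId() and getName() stand in for whatever accessors the entity actually has:

        // compare scalar fields instead of whole object graphs
        $this->assertSame($expectedEntity->getId(), $result->getId());
        $this->assertSame($expectedEntity->getName(), $result->getName());

        // or, since Doctrine hands back the same managed instance for the same
        // identity within one EntityManager, assert object identity directly:
        $this->assertSame($expectedEntity, $result);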

    Read the article

  • PHP gettext function only returns original untranslated string

    - by Camsoft
    I'm trying to use gettext to add localisation support to my website. I've followed various guides on how to set up gettext and have done the following: I've created the following files and directories in the root of my project dir: test.php locale/ de_DE LC_MESSAGES messages.mo messages.po en_GB LC_MESSAGES messages.mo messages.po I've used Poedit to create the above .po and .mo files. I've made sure it uses Unix line endings and UTF-8, and set the language and country accordingly. I've then created a PHP script called test.php which has the following code: <?php define('LOCALE', 'de_DE'); // Set up environmental variables putenv("LC_ALL=" . LOCALE); setlocale(LC_ALL, LOCALE); bindtextdomain("messages", "./locale"); bind_textdomain_codeset("messages", LOCALE .".utf8"); textdomain("messages"); die(gettext('This is a test.')); ?> I've imported the text "This is a test." to Poedit and supplied the translation and saved it. But for some reason the test.php script will only output the original text untranslated. It refuses to load the translated version from the translation files. It's worth noting that the server is running Linux (Ubuntu), Apache 2.2.11 and PHP 5.2.6-3ubuntu4.5. I've checked phpinfo() and gettext is enabled. Can someone help me? Thanks.
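
    gettext silently falls back to the original string when setlocale() fails, and on Ubuntu that usually means the locale has not been generated on the system (e.g. sudo locale-gen de_DE.utf8, followed by an Apache restart). A small diagnostic sketch under that assumption:

        <?php
        $locale = 'de_DE.utf8';
        putenv('LC_ALL=' . $locale);
        // setlocale() returns false if the OS does not know this locale
        var_dump(setlocale(LC_ALL, $locale));

        bindtextdomain('messages', __DIR__ . '/locale');
        bind_textdomain_codeset('messages', 'UTF-8');   // a codeset, not a locale name
        textdomain('messages');
        echo gettext('This is a test.');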

    Read the article

  • Unit testing Url.Action (using Rhino Mocks?)

    - by Kristoffer Ahl
    I'm trying to write a test for a UrlHelper extension method that is used like this: Url.Action<TestController>(x => x.TestAction()); However, I can't seem to set it up correctly so that I can create a new UrlHelper and then assert that the returned url was the expected one. This is what I've got, but I'm open to anything that does not involve mocking as well. ;O) [Test] public void Should_return_Test_slash_TestAction() { // Arrange RouteTable.Routes.Add("TestRoute", new Route("{controller}/{action}", new MvcRouteHandler())); var mocks = new MockRepository(); var context = mocks.FakeHttpContext(); // the extension from hanselman var helper = new UrlHelper(new RequestContext(context, new RouteData()), RouteTable.Routes); // Act var result = helper.Action<TestController>(x => x.TestAction()); // Assert Assert.That(result, Is.EqualTo("Test/TestAction")); } I tried changing it to urlHelper.Action("Test", "TestAction") but it fails anyway, so I know it is not my extension method that is not working. NUnit returns: NUnit.Framework.AssertionException: Expected string length 15 but was 0. Strings differ at index 0. Expected: "Test/TestAction" But was: <string.Empty> I have verified that the route is registered and working, and I am using Hanselman's extension for creating a fake HttpContext. Here's what my UrlHelper extension method looks like: public static string Action<TController>(this UrlHelper urlHelper, Expression<Func<TController, object>> actionExpression) where TController : Controller { var controllerName = typeof(TController).GetControllerName(); var actionName = actionExpression.GetActionName(); return urlHelper.Action(actionName, controllerName); } public static string GetControllerName(this Type controllerType) { return controllerType.Name.Replace("Controller", string.Empty); } public static string GetActionName(this LambdaExpression actionExpression) { return ((MethodCallExpression)actionExpression.Body).Method.Name; } Any ideas on what I am missing to get it working? / Kristoffer
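
    When UrlHelper.Action comes back as string.Empty against a fake HttpContext, one cause that is commonly reported is that HttpResponseBase.ApplyAppPathModifier is left unstubbed and returns null. A hedged Rhino Mocks sketch of the extra arrangement (the member names on the fake context are assumptions about how Hanselman's helper builds its stubs):

        // make the fake request look like it lives at the application root
        context.Request.Stub(x => x.ApplicationPath).Return("/");
        // hand generated URLs back unchanged instead of null
        context.Response.Stub(x => x.ApplyAppPathModifier(Arg<string>.Is.Anything))
               .Return(null)
               .WhenCalled(call => call.ReturnValue = call.Arguments[0]);
        // only needed if the helper leaves the mocks in record mode
        mocks.ReplayAll();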

    Read the article

  • TFS Integration with Rational ClearQuest and Requirement Manager

    - by Kangkan
    I am working on an integration approach for integrating Rational (IBM Jazz) Requirement Manager (RM) and ClearQuest (CQ) with TFS. As the teams are moving from ClearCase to TFS, what we are looking at is still being able to manage the requirements in RM and manage testing using CQ. The flow will be something like: Requirements are planned and detailed in RM Create work items in TFS connected to the Requirements in RM Create design and code using VS2010 and managing the version control in TFS Creating test plans and test cases in CQ (connected to requirements in RM) Run tests against builds in TFS Publish test results in CQ against builds in TFS Run reports in RM, CQ and TFS that link up the items across the platforms. I have started looking at the TFS Integration Platform, but would like your guidance for an early resolution and a better solution approach.

    Read the article

  • HTTP request, strange socket behavior

    - by hoodoos
    I experience strange behavior when doing HTTP requests through sockets. Here is the request: POST https://test.com:443/service/XMLSelect HTTP/1.1 Content-Length: 10926 Host: test.com User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705) Authorization: Basic XXX SOAPAction: http://test.com/SubmitXml After that comes the body of my request with the given content length. Then I receive something like: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Date: Tue, 30 Mar 2010 06:13:52 GMT So everything seems to be fine here. I read all contents from the network stream and successfully receive the response. But my socket, which I'm doing polling on, switches its modes like this: write ( I write headers and request here ) read ( after the headers are sent I begin to receive the response ) write ( STRANGE BEHAVIOUR HERE. WHY? here I send nothing really ) read ( here it switches to read back again ) The last two steps can repeat several times. So I want to ask what leads to the socket's mode change? And in this case it's not a big problem, but when I use gzip compression in my request ( no idea how it's related ) and ask the server to send a gzipped response to me like this: POST https://test.com:443/service/XMLSelect HTTP/1.1 Content-Length: 1076 Accept-Encoding: gzip Content-Encoding: gzip Host: test.com User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705) Authorization: Basic XXX SOAPAction: http://test.com/SubmitXml I receive a response like this: HTTP/1.1 200 OK Server: Apache-Coyote/1.1 Content-Encoding: gzip Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Date: Tue, 30 Mar 2010 07:26:33 GMT 2000 ? I receive a chunk size and the GZIP header, it's all okay. And here's what is happening with my poor little socket meanwhile: write ( I write headers and request here ) read ( after the headers are sent I begin to receive the response ) write ( STRANGE BEHAVIOUR HERE. And it finally sits here forever waiting for me to send something! But if I refer to HTTP I don't have to send anything more! ) What can it be related to? What does it want me to send? Is it the remote web server's problem or am I missing something? PS All actual service references and logins/passwords are replaced with fake ones :)

    Read the article

  • ASP.NET Applications Requests/Sec suddenly jumps to a value of about 70 million/sec. on 8 core web servers

    - by Subhrajit Roy
    We are doing performance testing of an ASP.NET web application with VSTS 2008. We start with 2000 users and slowly ramp up to 5000 users (reaching this user load at around 2.5 hours after the tests start; after this we stay at this user load). The total test duration is about 6 hours. During these runs we have found that the counter Requests/Sec (under category ASP.NET Applications) suddenly spikes to values of 36-72 million. This keeps happening intermittently, i.e. we see this issue once in every 3 performance runs that we give on the same application. In our testing environment we have 4 web servers and, interestingly enough, we have found that this issue occurs only on the 8 core web servers. Summarizing ... Issue: The counter Requests/Sec (under category ASP.NET Applications) suddenly jumps to a value of about 70 million/sec. on 8 core web servers. This results in an increase in SQL Server connections opened by the application. Response time goes for a toss. Error rates also show similar behaviour. However the counter ISAPI Extension Requests/sec does not show any abnormal increase. The graph of this counter almost overlaps with that of the counter Requests/Sec until the time of the appearance of the spike. When the spike appears, this counter (ISAPI Extension Requests/sec) actually shows a drop. Test Settings: Performance test run with Visual Studio Team System 2008. Soak test run for 6 hours. Maximum user load 5000 users. This load is attained at about 2.5 hours into the run and maintained for the remaining duration (i.e. for around 3.5 more hours). This issue is reproducible though it happens intermittently (i.e. occurs in one in three or four runs). Test Environment: Web site deployed on 4 Web Servers (Windows Server 2003). Of these, 2 are 4 core machines and the remaining 2 are 8 core ones. .NET Framework 3.5 SP1 installed on all 4 web servers. Application hosted on IIS 6.0 running in worker process isolation mode.

    Read the article

  • Drag and Drop to a Powershell script

    - by Nathan Hartley
    I thought I had an answer to this, but the more I play with it, the more I see it as a design flaw of PowerShell. I would like to drag and drop (or use the Send-To mechanism) to pass multiple files and/or folders as an array to a PowerShell script. Test Script #Test.ps1 param ( [string[]] $Paths, [string] $ExampleParameter ) "Paths" $Paths "args" $args I then created a shortcut with the following command line and dragged some files onto it. The files come across as individual parameters which first match the script parameters positionally, with the remainder being placed in the $args array. Shortcut Attempt 1 powershell.exe -noprofile -noexit -file c:\Test.ps1 I found that I can do this with a wrapper script... Wrapper Script #TestWrapper.ps1 & .\Test.ps1 -Paths $args Shortcut Attempt 2 powershell.exe -noprofile -noexit -file c:\TestWrapper.ps1 Has anyone found a way to do this without the extra script?

    Read the article

  • Perl, module export symbol

    - by Mike
    I'm having trouble understanding how to export a package symbol to a namespace. I've followed the documentation almost identically, but it seems to not know about any of the exporting symbols. mod.pm #!/usr/bin/perl package mod; use strict; use warnings; require Exporter; @ISA = qw(Exporter); @EXPORT=qw($a); our $a=(1); 1; test.pl $ cat test.pl #!/usr/bin/perl use mod; print($a); This is the result of running it $ ./test.pl Global symbol "@ISA" requires explicit package name at mod.pm line 10. Global symbol "@EXPORT" requires explicit package name at mod.pm line 11. Compilation failed in require at ./test.pl line 3. BEGIN failed--compilation aborted at ./test.pl line 3. $ perl -version This is perl, v5.8.4 built for sun4-solaris-64int
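
    The error itself points at the fix: under use strict the package arrays have to be declared, for example with our. A sketch of mod.pm along those lines:

        package mod;
        use strict;
        use warnings;
        require Exporter;

        our @ISA    = qw(Exporter);   # declared with "our" so strict is satisfied
        our @EXPORT = qw($a);         # note: $a/$b are special to sort(); a different name avoids surprises
        our $a = 1;

        1;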

    Read the article

  • ie7 z-index problem

    - by rezna
    Hi, I've isolated a little test case of IE7's z-index bug, but don't know how to fix it. I have been playing with z-indices all day long but it didn't help. If someone knows what to do about it, please help ;) The test case is located here - http://upload.rezna.info/z-index-test.html The problem is that in IE7 the second textbox is placed over the red list (suggest box). Thx, rezna

    Read the article

  • Something is wrong with this C code to reverse a string, but I don't know what? Please help

    - by Harbhag
    Hi all, I am a beginner to programming. I wrote this little program to reverse a string, but if I try to reverse a string which is less than 5 characters long then it gives the wrong output. I can't seem to find what's wrong. #include<stdio.h> #include<string.h> int main() { char test[50]; char rtest[50]; int i, j=0; printf("Enter string : "); scanf("%s", test); int max = strlen(test) - 1; for ( i = max; i>=0; i--) { rtest[j] = test[i]; j++; } printf("Reversal is : %s\n", rtest); return 0; }
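
    The reversed buffer is never null-terminated, so printf reads whatever happens to follow the copied characters in rtest; with short inputs that leftover garbage is simply more visible. A sketch of the program with the terminator (and a bounded scanf) added:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            char test[50];
            char rtest[50];
            int i, j = 0;

            printf("Enter string : ");
            scanf("%49s", test);               /* bound the read to the buffer size */

            int max = strlen(test) - 1;
            for (i = max; i >= 0; i--) {
                rtest[j] = test[i];
                j++;
            }
            rtest[j] = '\0';                   /* the missing terminator */

            printf("Reversal is : %s\n", rtest);
            return 0;
        }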

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many to many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this: +---------+-----------+---------------+---------------------+---------------------+-----------------+ | vehicle | engine | engine_state | state_start_time | state_end_time | engine_variable | +---------+-----------+---------------+---------------------+---------------------+-----------------+ | 080025 | E01 | active | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 | 720 | | 080028 | E02 | inactive | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 | 304 | +---------+-----------+---------------+---------------------+---------------------+-----------------+ For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table: +---------------------+ | production_minute | +---------------------+ | 2009-12-01 00:00 | | 2009-12-01 00:01 | | 2009-12-01 00:02 | | 2009-12-01 00:03 | +---------------------+ The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.event_end_time. This join, however, brings up multiple environmental issues: The engineState table has 5 million rows and the productionMinute table has 130,000 rows When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
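
    For what it is worth, a common starting point for this shape of query is to make sure the time-window columns are indexed and to let the smaller minute table drive an inner join. The sketch below reuses the column names from the sample engineState table and is only a starting point, not a tuned solution:

        -- give the optimizer something to range-scan on the state window
        ALTER TABLE engineState
          ADD INDEX idx_state_window (state_start_time, state_end_time);

        SELECT pm.production_minute,
               es.vehicle,
               es.engine,
               es.engine_variable
        FROM   productionMinute AS pm
        INNER JOIN engineState  AS es
               ON  es.state_start_time <= pm.production_minute
               AND es.state_end_time   >= pm.production_minute
        WHERE  pm.production_minute BETWEEN '2009-12-01 00:00:00'
                                        AND '2010-02-28 23:59:00';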

    Read the article

  • Maven GAE Plugin - Unable to run gae:debug

    - by Taylor L
    I'm having trouble running the gae:debug goal of the Maven GAE Plugin. The error I'm receiving is below. Any ideas? I'm running it with "mvn gae:debug". [INFO] Packaging webapp [INFO] Assembling webapp[test-gae] in [C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT] [INFO] Processing war project [INFO] Webapp assembled in[56 msecs] [INFO] Building war: C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT.war [INFO] [statemgmt:end-fork] [INFO] Ending forked execution [fork id: -2101914270] [INFO] [gae:debug] Usage: <dev-appserver> [options] <war directory> Options: --help, -h Show this help message and exit. --server=SERVER The server to use to determine the latest -s SERVER SDK version. --address=ADDRESS The address of the interface on the local machine -a ADDRESS to bind to (or 0.0.0.0 for all interfaces). --port=PORT The port number to bind to on the local machine. -p PORT --sdk_root=root Overrides where the SDK is located. --disable_update_check Disable the check for newer SDK versions. EDIT: gae:run with the jvmFlags option is also giving me the same result with the below configuration. <plugin> <groupId>net.kindleit</groupId> <artifactId>maven-gae-plugin</artifactId> <version>0.5.0</version> <configuration> <jvmFlags> <jvmFlag>-Xdebug</jvmFlag> <jvmFlag>-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000</jvmFlag> </jvmFlags> </configuration> </plugin>

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology - between two or more servers - there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column which already exists at the destination database. Solution There are two ways to address this situation: Assign a range of identity values to each server. Work with parallel identity values. The first method requires some maintenance while the second method does not, and so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article. In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing the INSERT commands from failing due to a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on what server the original insert took place due to the range that the identity value belongs to. In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only during the CREATE TABLE command of each table. So a table on server1 looks like this: CREATE TABLE T1 (  c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED ,c2 int NOT NULL ); And a table on server2 looks like this: CREATE TABLE T1(  c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED ,c2 int NOT NULL ); When rows are inserted into these two tables, the identity values look like this: Server1:  1, 6, 11, 16, 21, 26… Server2:  2, 7, 12, 17, 22, 27… This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of 5 as I used in this example. Continues…
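
    As a rough illustration of the first method, the per-server ranges could be assigned with DBCC CHECKIDENT along these lines (dbo.T1 here is a hypothetical table with an ordinary IDENTITY(1,1) column c1, and the boundaries are the example values from the article):

        -- Server1: keep its identities at the bottom of its range
        DBCC CHECKIDENT ('dbo.T1', RESEED, 0);

        -- Server2: push its identities into the 1,000,000,001+ range
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000000);

        -- optional guard on Server1 so an insert outside its range fails loudly
        ALTER TABLE dbo.T1
          ADD CONSTRAINT ck_T1_identity_range CHECK (c1 BETWEEN 1 AND 1000000000);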

    Read the article

  • Development process for an embedded project with significant hardware change

    - by pierr
    Hi, I have a good idea about the Agile development process but it seems it does not fit well with an embedded project with significant hardware change. I will describe below what we are currently doing (an ad-hoc way, no defined process yet). The changes are divided into three categories and different processes are used for them: complete hardware change, example: use a different video codec IP - a) Study the new IP b) RTL/FPGA simulation c) Implement the legacy interface - go to b) d) Wait until hardware (tape out) is ready e) Test on the real hardware; hardware improvement, example: enhance the image display quality by improving the underlying algorithm - a) RTL/FPGA simulation b) Wait until hardware and test on the hardware; minor change, example: only change the hardware register mapping - a) Wait until hardware and test on the hardware. The worry is that it seems we don't have much control over, or confidence in, software maturity for the hardware change, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new hardware version. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have a well-documented process for this kind of change? Thanks for your insight.

    Read the article
