Search Results

Search found 4893 results on 196 pages for 'expect'.

  • String.Empty in strings, need some explanation if possible :)

    - by Pabuc
    Hello all, 2 days ago there was a question about string.LastIndexOf(String.Empty) returning the last index of the string. So I thought: a string can always contain String.Empty between characters, like:

        "testing" == "t" + String.Empty + "e" + String.Empty + "sting" + String.Empty;

    After this, I wanted to test whether String.IndexOf(String.Empty) returns 0, because since String.Empty can be between any two chars in a string, that is what I would expect it to return, and I wasn't wrong:

        string testString = "testing";
        int index = testString.LastIndexOf(string.Empty); // index is 6
        index = testString.IndexOf(string.Empty);         // index is 0

    It actually returned 0. I started to think that if I could split a string with String.Empty, I would get at least 2 strings, and those would be String.Empty and the rest of the string, since String.IndexOf(String.Empty) returned 0 and String.LastIndexOf(String.Empty) returned the last index of the string. Here is what I coded:

        string emptyString = string.Empty;
        char[] emptyStringCharArr = emptyString.ToCharArray();
        string myDummyString = "abcdefg";
        string[] result = myDummyString.Split(emptyStringCharArr);

    The problem here is that converting String.Empty to a char[] obviously yields an empty array, so this doesn't produce the split I was after. I would really love to see the result of this operation and the reason behind it. So my questions are: Is there any way to split a string with String.Empty? If it is not possible, but in an absolute world where it would be possible, would it return an array full of chars like [0] = "t", [1] = "e", [2] = "s" and so on, or would it just return the complete string? Which would make more sense, and why? Thank you for your time.
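    If it helps to picture the "absolute world" case: a split on every empty position would simply yield the individual characters. A minimal LINQ-to-Objects sketch of that interpretation, reusing the question's variable name (the sketch itself is not from the original post):

        // Hypothetical "split on String.Empty": one single-character string per position.
        string myDummyString = "abcdefg";
        string[] parts = myDummyString
            .Select(c => c.ToString()) // requires using System.Linq;
            .ToArray();                // ["a", "b", "c", "d", "e", "f", "g"]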

  • LINQ-to-SQL IN/Contains() for Nullable<T>

    - by Craig Walker
    I want to generate this SQL statement in LINQ:

        select * from Foo where Value in ( 1, 2, 3 )

    The tricky bit seems to be that Value is a column that allows nulls. The equivalent LINQ code would seem to be:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value)
                     select foo;

    This, of course, doesn't compile, since foo.Value is an int? and values is typed to int. I've tried this:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        IEnumerable<int?> nullables = values.Select(value => new Nullable<int>(value));
        var myFoos = from foo in foos
                     where nullables.Contains(foo.Value)
                     select foo;

    ...and this:

        IEnumerable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int> values = GetMyValues();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value.Value)
                     select foo;

    Both of these versions give me the results I expect, but they do not generate the SQL I want. It appears that they're generating full-table results and then doing the Contains() filtering in-memory (ie: in plain LINQ, without -to-SQL); there's no IN clause in the DataContext log. Is there a way to generate a SQL IN for Nullable types?
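    One observation worth adding (not from the original post): because foos is declared as IEnumerable<Foo> rather than IQueryable<Foo>, the Contains() call is bound to LINQ to Objects, so the provider never gets a chance to translate it into an IN clause. A sketch of the queryable form, with the nullable mismatch handled by Cast<int?>(); whether this exact shape translates should be verified against the DataContext log:

        // Stay on IQueryable so LINQ-to-SQL can translate the whole query.
        IQueryable<Foo> foos = MyDataContext.Foos;
        IEnumerable<int?> values = GetMyValues().Cast<int?>();
        var myFoos = from foo in foos
                     where values.Contains(foo.Value)
                     select foo; // intended translation: WHERE [Value] IN (...)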

  • Why does the Java VM not recover after "Too many open files" errors?

    - by Michael
    In certain well-understood circumstances, our application will open too many sockets (database connections) and reach the maximum open files that the OS allows. We understand this; we are fixing the issue and also bumping up the limit. What we can't explain is why parts of our application don't recover even after the number of connections abates and we're well within the limit. In this case, it's an application running under Tomcat. When this happens, we first start seeing "Too many open files" errors:

        SEVERE: Socket accept failed
        java.net.SocketException: Too many open files
            at java.net.PlainSocketImpl.socketAccept(Native Method)
            at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
            at java.net.ServerSocket.implAccept(ServerSocket.java:453)
            at java.net.ServerSocket.accept(ServerSocket.java:421)
            at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
            at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:310)
            at java.lang.Thread.run(Thread.java:619)

    Eventually, we start seeing NoClassDefFoundErrors inside an application thread that's trying to open HTTP connections:

        java.lang.NoClassDefFoundError: org/apache/commons/httpclient/protocol/ControllerThreadSocketFactory
            at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:128)
            at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
            at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1349)
            [...]
        Caused by: java.lang.ClassNotFoundException: org.apache.commons.httpclient.protocol.ControllerThreadSocketFactory
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 8 more

    When the errant connections go away, the server starts accepting connections again and everything seems OK, but we're left with the latter error constantly being spewed to stderr. Although the application typically logs unloaded classes to stdout, I don't see any such logs just before, during or after the "Too many open files" errors. My initial theory was that the HotSpot JVM would unload seemingly unused classes when it encounters "Too many open files", but if so, it doesn't log the fact. I'd also expect it to recover if that were the case. Platform details:

        Java(TM) SE Runtime Environment (build 1.6.0_14-b08)
        Java HotSpot(TM) 64-Bit Server VM (build 14.0-b16, mixed mode)
        Apache Tomcat Version 6.0.18

  • Business Logic Layer Pattern on Rails? MVCL

    - by Fabiano PS
    This is a broad question, and I would appreciate no short/dumb answers like: "Oh, that is the model's job, this question is retarded (period)".

    PROBLEM: Where I work, people spent 2 years creating a system for managing the manufacturing process on demand, in the most simplified yet broad way possible, involving selling, buying, and assembly. The system is coded on Ruby on Rails. The app has been changed many times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: all bad.

    The QUESTION is: is there a gem, or a pattern, designed to handle the logic of a large Rails app? The logic would be able to fully talk to the models (whose only concern would be data format handling and validation).

    What I EXPECT is to reduce complexity by moving logic from various controllers and hard-to-track callbacks into files whose responsibility is to handle one business operation. In some cases there is the need to wait for a response; in others, validating the input is enough and a background process would take over. For example, to sell some products (where we need to wait for the operation to finish):

    1. Set up a view able to get the products input.
    2. The controller gets the product list entered by the employee and calls the logic:

        Logic::ExecuteWithResponse('sell', 'products',
          :prods    => @product_list_with_qtt,
          :when     => @date,
          :employee => current_user())

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others. Keep in mind that a callback on SalesOrder is not enough, since it depends on where it is called (no field for that) and on the class of the user, among other things not visible to the model; in some cases it would also take too long for the model to process.

  • Mocking WebResponse's from a WebRequest

    - by Rob Cooper
    I have finally started messing around with creating some apps that work with RESTful web interfaces; however, I am concerned that I am hammering their servers every time I hit F5 to run a series of tests. Basically, I need to get a series of web responses so I can test that I am parsing the varying responses correctly. Rather than hit their servers every time, I thought I could do this once, save the XML, and then work locally. However, I don't see how I can "mock" a WebResponse, since (AFAIK) they can only be instantiated by WebRequest.GetResponse. How do you guys go about mocking this sort of thing? Do you? I just really don't like the fact that I am hammering their servers :S I don't want to change the code too much, but I expect there is an elegant way of doing this.

    Update: Following the accepted answer - Will's answer was the slap in the face I needed; I knew I was missing a fundamental point!

    - Create an interface that will return a proxy object which represents the XML.
    - Implement the interface twice: one implementation that uses WebRequest, the other returning static "responses".
    - The interface implementation then either instantiates the return type based on the response, or on the static XML.
    - You can then pass the required class when testing or in production to the service layer.

    Once I have the code knocked up, I'll paste some samples. Thanks Will :)
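    A minimal sketch of that two-implementation shape (all names here are hypothetical, not from the original post):

        // Hypothetical interface: the service layer depends on this, not on WebResponse.
        public interface IResponseSource
        {
            string GetXml(string resource);
        }

        // Production implementation: goes over the wire.
        public class WebResponseSource : IResponseSource
        {
            public string GetXml(string resource)
            {
                var request = System.Net.WebRequest.Create(resource);
                using (var response = request.GetResponse())
                using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }

        // Test implementation: replays XML saved to disk, so the real servers are never hit.
        public class CannedResponseSource : IResponseSource
        {
            public string GetXml(string resource)
            {
                return System.IO.File.ReadAllText("canned/" + resource + ".xml");
            }
        }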

  • Using Entity Framework with an SQL Compact Private Installation

    - by David Veeneman
    I am using Entity Framework 4 in a desktop application with SQL Compact. I want to use a private installation of SQL Compact with my application, so that my installer can install SQL Compact without giving the user a second installation to do. It also avoids versioning hassles down the road. My development machine has SQL Compact 3.5 SP1 installed as a public installation, so my app runs fine there, as one would expect. But it's not running on my test machine, which does not have SQL Compact installed. I get this error:

        The specified store provider cannot be found in the configuration, or is not valid.

    I know some people have had difficulty with SQL Compact private installations, but I have used them for a while, and I really like them. Unfortunately, my regular private installation approach isn't working. I have checked the version numbers on my SQL CE files, and they are all 3.8.8078.0, which is the SP2 RC version. Here are the files I have included in my private installation:

        sqlcecompact35.dll
        sqlceer35EN.dll
        sqlceme35.dll
        sqlceqp35.dll
        sqlcese35.dll
        System.Data.SqlServerCe.dll
        System.Data.SqlServerCe.Entity.dll

    I have added a reference to System.Data.SqlServerCe to my project, and I have verified that all of the files listed above are being copied to the application folder on the installation machine. Here is the code I use to configure an EntityConnectionStringBuilder when I open a SQL Compact file:

        var sqlCompactConnectionString = string.Format("Data Source={0}", filePath);

        // Set Builder properties
        builder.Metadata = string.Format("res://*/{0}.csdl|res://*/{0}.ssdl|res://*/{0}.msl", edmName);
        builder.Provider = "System.Data.SqlServerCe.3.5";
        builder.ProviderConnectionString = sqlCompactConnectionString;

        var edmConnectionString = builder.ToString();

    Am I missing a file? Am I missing a configuration step needed to tell Entity Framework where to find my SQL Compact DLLs? Any other suggestions why EF isn't finding my SQL Compact DLLs on the installation machine? Thanks for your help.
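    One angle worth checking: with a private (non-GAC) deployment there is no machine.config entry for the provider on the clean machine, so ADO.NET typically needs the provider factory registered in the application's own config file. A sketch of that registration; the Version and PublicKeyToken values must match the System.Data.SqlServerCe.dll actually shipped, so treat them as placeholders to verify:

        <!-- app.config: register the SQL CE provider factory locally. -->
        <!-- Verify Version/PublicKeyToken against the shipped System.Data.SqlServerCe.dll. -->
        <system.data>
          <DbProviderFactories>
            <remove invariant="System.Data.SqlServerCe.3.5" />
            <add name="Microsoft SQL Server Compact Data Provider"
                 invariant="System.Data.SqlServerCe.3.5"
                 description=".NET Framework Data Provider for Microsoft SQL Server Compact"
                 type="System.Data.SqlServerCe.SqlCeProviderFactory, System.Data.SqlServerCe, Version=3.5.1.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
          </DbProviderFactories>
        </system.data>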

  • Trying to Create an Enterprise Provisioning Profile--Distribution Tab is Missing

    - by El' Jocko
    I'm attempting to create an Enterprise Provisioning Profile, but the program portal isn't working the way I expect it to (based on the documentation). Here's what I'm seeing: First, I log on as the Team Agent. Second, I navigate to the Provisioning section of the program portal. According to the documentation, I should see four tabs at this point: Development, Distribution, History, How To. But when I do this, the Distribution tab isn't there. And without the Distribution tab, I can't create an Enterprise Provisioning Profile (or an Ad Hoc one, for that matter). I've tried contacting Apple administrative support, but they weren't able to provide any help with this. They suggested that I try the forum. :-) Just to fill in a few more details: my company has an enterprise membership in the developer program, and when I navigate to the Certificates section, there is a Distribution tab. I figure that there must be something that I've done incorrectly. Has anyone had a similar experience? Any suggestions on what to try next?

  • Why is my Android app camera preview running out of memory on my AVD?

    - by Bryan
    I have yet to try this on an actual device, but I expect similar results. Anyway, long story short, whenever I run my app on the emulator, it crashes due to an out-of-memory exception. My code really is essentially the same as the camera preview API demo from Google, which runs perfectly fine. The only file in the app (that I created/use) is as below:

        package berbst.musicReader;

        import java.io.IOException;
        import android.app.Activity;
        import android.content.Context;
        import android.hardware.Camera;
        import android.os.Bundle;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;

        /*********************************
         * Music Reader v.0001
         * Still VERY under construction.
         * @author Bryan
         *********************************/
        public class MusicReader extends Activity {
            private MainScreen main;

            @Override // Begin activity
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                main = new MainScreen(this);
                setContentView(main);
            }

            class MainScreen extends SurfaceView implements SurfaceHolder.Callback {
                SurfaceHolder sHolder;
                Camera cam;

                MainScreen(Context context) {
                    super(context);
                    // Set up SurfaceHolder
                    sHolder = getHolder();
                    sHolder.addCallback(this);
                    sHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
                }

                public void surfaceCreated(SurfaceHolder holder) {
                    // Open the camera and start viewing
                    cam = Camera.open();
                    try {
                        cam.setPreviewDisplay(holder);
                    } catch (IOException exception) {
                        cam.release();
                        cam = null;
                    }
                }

                public void surfaceDestroyed(SurfaceHolder holder) {
                    // Kill all our crap with the surface
                    cam.stopPreview();
                    cam.release();
                    cam = null;
                }

                public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                    // Modify parameters to match size.
                    Camera.Parameters params = cam.getParameters();
                    params.setPreviewSize(w, h);
                    cam.setParameters(params);
                    cam.startPreview();
                }
            }
        }

  • To GAC, or not to GAC?

    - by Jagd
    I have a data access layer (DAL) that is written in ASP.NET 3.5 and uses the Microsoft patterns & practices libraries (hereafter referred to as P&P) in order to accomplish its data access. I installed P&P and it resides in my GAC, so, logically, my DAL references it in the GAC. Therefore, the P&P libraries are never pulled down to the bin folder of my DAL. I use this DAL project in at least five (more than that even, but I'm too lazy to try to count them all) different websites. And this has all worked just fine for me because I'm the only developer who works on these websites. But, now I have other developers who are going to work on some of these websites. The problem: if a developer pulls the DAL project down from our code repository, it won't build for them if they don't have the P&P libraries installed. My question: should I expect the developers to install the P&P libraries, or should I just dump them in the bin folder and be done with it? I realize that dumping them into the bin folder is probably the easiest way to deal with the problem, but I've never been a big fan of the bin folder if I can reference them in the GAC instead.

  • XML Schema Migration

    - by Corwin Joy
    I am working on a project where we need to save data in an XML format. The problem is that, over time, we expect the format/schema for our data to change. What we want to be able to do is produce scripts to migrate our data across different schema versions. We distribute our product to thousands of customers, so we need to be able to run/apply these scripts at customer sites (we can't just do the conversions by hand). I think that what we are looking for is some kind of XML data migration tool. In my mind the ideal tool could:

    - Do an "XML diff" of two schemas to identify added/deleted/changed nodes.
    - Allow us to specify transformation functions. So, for example, we might add a new element to our schema that is a function of the old elements (e.g. a new element C where C = A + B, and A and B are old elements).

    So I think I am looking for a kind of XML diff and patch tool which can also apply transformation functions. One tool I am looking at for this is Altova's MapForce. I'm sure others here have had to deal with XML data format migration. How did you handle it?

    Edit: One point of clarification. The "diff" I plan to do is on the schema or .xsd files. The actual changes will be made to particular data sets that follow a given schema. These data sets will be .xml files. So it's a "diff" of the schemas to help figure out what changes need to be made to data sets to migrate them from one schema to another.
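    One common way to make such migrations runnable at customer sites is to ship each schema-version step as an XSLT stylesheet and apply the steps in order; the transformation functions (like C = A + B) live inside the stylesheet. A minimal C# sketch under that assumption (file names are illustrative only):

        using System.Xml.Xsl;

        // Apply one migration step (e.g. v1 -> v2) shipped as an XSLT file.
        var transform = new XslCompiledTransform();
        transform.Load("migrate-v1-to-v2.xslt");
        transform.Transform("customer-data-v1.xml", "customer-data-v2.xml");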

  • Setting variable values in a Moq Callback() call

    - by Adam Driscoll
    I think I may be a bit confused about the syntax of the Moq Callback methods. When I try to do something like this:

        IFilter filter = new Filter();
        List<IFoo> objects = new List<IFoo> { new Foo(), new Foo() };
        IQueryable myFilteredFoos = null;
        mockObject.Setup(m => m.GetByFilter(It.IsAny<IFilter>()))
            .Callback((IFilter f) => myFilteredFoos = f.FilterCollection(objects))
            .Returns(myFilteredFoos.Cast<IFooBar>());

    it throws an exception because myFilteredFoos is null during the Cast<IFooBar>() call. Is this not working as I expect? I would think FilterCollection would be called, and then myFilteredFoos would be non-null and allow the cast. FilterCollection is not capable of returning null, which draws me to the conclusion that it is not being called. Also, when I declare myFilteredFoos like this:

        IQueryable myFilteredFoos;

    the Returns call complains that myFilteredFoos may be used before it is initialized.
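    For context on the failure mode: the argument passed to Returns() is evaluated once, at setup time, while the Callback only runs later, when the mocked method is invoked. Moq also accepts a delegate for Returns, which defers evaluation until invocation time; a sketch of that form against the question's own setup:

        // Deferred Returns: the lambda executes when GetByFilter is invoked,
        // i.e. after the Callback has assigned myFilteredFoos.
        mockObject.Setup(m => m.GetByFilter(It.IsAny<IFilter>()))
            .Callback((IFilter f) => myFilteredFoos = f.FilterCollection(objects))
            .Returns(() => myFilteredFoos.Cast<IFooBar>());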

  • Creating a custom IP-STS for SharePoint Foundation 2010 without ADFS

    - by user252229
    I plan to create a very simple custom IP-STS for SharePoint Foundation 2010 without an ADFS server, so anyone can integrate Windows Live ID into SharePoint Foundation 2010 simply, without ADFS. I can't use an ADFS server because it cannot be installed on Windows Web Server 2008 (Web Edition). I also found many articles that use an LDAP provider, but that does not exist in SharePoint Foundation either (it requires the SharePoint Server edition). After much searching I found the following articles and worked out the whole technique except for one problem.

    1) Creating a custom claims provider: blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx
    2) Creating a custom STS provider: http://blogs.msdn.com/b/chunliu/archive/2010/04/02/how-to-make-use-of-a-custom-ip-sts-with-sharepoint-2010-part-1.aspx

    Only one step remains: after entering a username on the STS site and being redirected to localhost/_trust/default.aspx (I leave EncryptingCertificateName empty), I get the following error:

        Operation is not valid due to the current state of the object

    I expect to get an access denied error instead of that error.

    1. Is it possible anyway?
    2. Can anyone help me find a working article on creating a custom IP-STS without an ADFS server?

    Any idea will help me. Thanks

  • How should I handle persistence in a Java MUD? OptimisticLockException handling

    - by Chase
    I'm re-implementing an old BBS MUD game in Java, with permission from the original developers. Currently I'm using Java EE 6 with EJB session facades for the game logic and JPA for the persistence. A big reason I picked session beans is JTA. I'm more experienced with web apps, in which, if you get an OptimisticLockException, you just catch it and tell the user their data is stale and they need to re-apply/re-submit. Responding with "try again" all the time in a multi-user game would make for a horrible experience. Given that I'd expect several people to be targeting a single monster during a fight, I think the chance of an OptimisticLockException would be high. My view code, the part presenting a telnet CLI, is the EJB client.

    - Should I be catching the PersistenceExceptions and TransactionRolledbackLocalExceptions and just retrying? How do you decide when to stop?
    - Should I switch to pessimistic locking?
    - Is persisting after every user command overkill? Should I be loading the entire world into RAM and dumping the state every couple of minutes?
    - Should I make my session facade an EJB 3.1 singleton, which would function as a choke point and therefore eliminate the need for any kind of JPA locking? EJB 3.1 singletons function with a multiple-reader/single-writer design (you annotate the methods as readers and writers).

    Basically, what is the best design and Java persistence API for highly concurrent data changes in an application where it is not acceptable to present resubmit/retry prompts to the user?

  • Consume webservice from a .NET DLL - app.config problem

    - by Asaf R
    Hi, I'm building a DLL, let's call it mydll.dll, and in it I sometimes need to call methods from a web service, myservice. mydll.dll is built using C# and .NET 3.5. To consume myservice from mydll I've used Add Service Reference in Visual Studio 2008, which is more or less the same as using svcutil.exe. Doing so creates a client class I can instantiate, and adds endpoint and binding configurations to the mydll app.config. The problem here is that the mydll app.config is never loaded. Instead, what's loaded is the app.config or web.config of the program that uses mydll. I expect mydll to evolve, which is why I've decoupled its functionality from the rest of my system to begin with. During that evolution it will likely add more web services to call, ruling out manual copy-paste ways of overcoming this problem. I've looked at several possible approaches to attacking this issue:

    1. Manually copy endpoints and bindings from the mydll app.config to the target EXE or web .config file. Couples the modules; not flexible.
    2. Include endpoints and bindings from the mydll app.config in the target .config, using configSource (see here). Also adds coupling between modules.
    3. Programmatically load the mydll app.config, read endpoints and bindings, and instantiate Binding and EndpointAddress (see the sketch below).
    4. Use a different tool to create the local frontend for myservice.

    I'm not sure which way to go. Option 3 sounds promising, but as it turns out it's a lot of work and will probably introduce several bugs, so it's doubtful it pays off. I'm also not familiar with any tool other than the canonical svcutil.exe. Please either give pros and cons for the above alternatives, provide tips for implementing any of them, or suggest other approaches. Thanks, Asaf
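    For option 3, constructing the client entirely in code is often simpler than parsing the DLL's own config file, since generated WCF clients accept a binding and endpoint address directly. A minimal sketch, assuming the generated client is named MyServiceClient and the service uses basicHttpBinding (both are assumptions, not from the original post):

        using System.ServiceModel;

        // Hypothetical names: MyServiceClient and the URL are placeholders.
        var binding = new BasicHttpBinding();
        var endpoint = new EndpointAddress("http://example.com/myservice.asmx");
        var client = new MyServiceClient(binding, endpoint);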

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for LINQ-to-SQL:

        public interface IAdapter : IDisposable
        {
            Table<Data.User> Users { get; }
        }

    Data.User is an object defined by LINQ-to-SQL pointing to the User table in persistence. The implementation for this is as follows:

        public class Adapter : IAdapter
        {
            private readonly SecretDataContext _context = new SecretDataContext();

            public void Dispose()
            {
                _context.Dispose();
            }

            public Table<Data.User> Users
            {
                get { return _context.Users; }
            }
        }

    This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks):

        Expect.Call(_adapter.Users).Return(users);

    The problem is that I cannot create the object 'users', since the constructors are not accessible and the class Table<T> is sealed. One option I tried is to just make IAdapter return IEnumerable<Data.User> or IQueryable<Data.User>, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit test scenario so that I may be a happy TDD developer?
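    One shape that sidesteps the sealed Table<T> entirely is to expose a small wrapper interface with just the members the code actually uses. A sketch with hypothetical names (this is a workaround pattern, not a way to instantiate Table<T> itself):

        // Hypothetical wrapper: only the members the application needs.
        public interface ITableWrapper<T> where T : class
        {
            IQueryable<T> Rows { get; }
            void InsertOnSubmit(T entity);
        }

        // Production implementation delegates to the real Table<T>.
        public class TableWrapper<T> : ITableWrapper<T> where T : class
        {
            private readonly Table<T> _table;
            public TableWrapper(Table<T> table) { _table = table; }
            public IQueryable<T> Rows { get { return _table; } }
            public void InsertOnSubmit(T entity) { _table.InsertOnSubmit(entity); }
        }

        // Test implementation is just an in-memory list; no mocks needed.
        public class FakeTable<T> : ITableWrapper<T> where T : class
        {
            private readonly List<T> _rows = new List<T>();
            public IQueryable<T> Rows { get { return _rows.AsQueryable(); } }
            public void InsertOnSubmit(T entity) { _rows.Add(entity); }
        }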

  • Nonblocking TCP server

    - by hoodoos
    It's not a question really, I'm just looking for some guidelines :) I'm currently writing an abstract TCP server which should use as low a number of threads as it can. Currently it works this way: I have a thread doing the listening and some worker threads. The listener thread just sits and waits for clients to connect; I expect to have a single listener thread per server instance. Worker threads do all the read/write/processing work on the client sockets. So my problem is in building an efficient worker process, and I came to a problem I can't really solve yet. The worker code is something like this (the code is really simple, just to show the place where I have my problem):

        List<Socket> readSockets = new List<Socket>();
        List<Socket> writeSockets = new List<Socket>();
        List<Socket> errorSockets = new List<Socket>();

        while (true)
        {
            Socket.Select(readSockets, writeSockets, errorSockets, 10);

            foreach (Socket readSocket in readSockets)
            {
                // do reading here
            }

            foreach (Socket writeSocket in writeSockets)
            {
                // do writing here
            }

            // POINT2 and here's the problem I will describe below
        }

    It all works smoothly except for 100% CPU utilization, because of the while loop cycling over and over again. If I have my clients doing a send-receive-disconnect routine it's not that painful, but if I try to keep alive, doing send-receive-send-receive over and over again, it really eats up all the CPU. So my first idea was to put a sleep there: I check if all sockets have their data sent, and then put a Thread.Sleep at POINT2, just for 10 ms. But this 10 ms later produces a huge delay of that same 10 ms when I want to receive the next command from the client socket. For example, if I don't try to "keep alive", commands are executed within 10-15 ms, and with keep-alive it becomes worse by at least 10 ms :( Maybe it's just poor architecture? What can be done so that my processor doesn't hit 100% utilization and my server reacts to something appearing in a client socket as soon as possible? Maybe somebody can point to a good example of a nonblocking server and the architecture it should maintain?
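    Two details about Socket.Select are worth noting here: its last parameter is a timeout in microseconds (so 10 means 10 µs, not 10 ms), and sockets in the write list report ready almost always (a socket is writable whenever buffer space is available), so polling every socket for writability forces Select to return immediately and the loop busy-spins. A hedged sketch of the usual shape (allClients and HasPendingOutput are hypothetical helpers):

        // Rebuild the lists each iteration; Select mutates them in place.
        List<Socket> checkRead  = new List<Socket>(allClients);
        List<Socket> checkError = new List<Socket>(allClients);
        // Only ask about writability for sockets with queued outbound data,
        // otherwise the write list is "ready" immediately and Select never waits.
        List<Socket> checkWrite = allClients.Where(s => HasPendingOutput(s)).ToList();

        // 500000 microseconds = 500 ms upper bound; Select returns earlier
        // the moment any watched socket becomes ready.
        Socket.Select(checkRead, checkWrite, checkError, 500000);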

  • Session variables with Cucumber Stories

    - by Matthew Savage
    I am working on some Cucumber stories for a 'sign up' application which has a number of steps. Rather than writing a huuuuuuuge story to cover all the steps at once, which would be bad, I'd rather work through each action in the controller like a regular user. My problem here is that I am storing the account ID which is created in the first step as a session variable, so when step 2, step 3, etc. are visited, the existing registration data is loaded. I'm aware of being able to access controller.session[..] within RSpec specifications; however, when I try to do this in Cucumber stories it fails with the following errors (and I've also read somewhere this is an anti-pattern etc.):

    Using controller.session[:whatever] or session[:whatever]:

        You have a nil object when you didn't expect it!
        The error occurred while evaluating nil.session (NoMethodError)

    Using session(:whatever):

        wrong number of arguments (1 for 0) (ArgumentError)

    So, it seems accessing the session store isn't really possible. What I'm wondering is whether it might be possible to (and I guess which would be best):

    - Mock out the session store etc.
    - Have a method within the controller and stub that out (e.g. get_registration, which assigns an instance variable...)

    I've looked through the RSpec book (well, skimmed) and had a look through Webrat etc., but I haven't really found an answer to my problem. To clarify a bit more, the signup process is more like a state machine - e.g. the user progresses through four steps before the registration is complete - hence 'logging in' isn't really an option (it breaks the model of how the site works). In my spec for the controller I was able to stub out the call to the method which loads the model based on the session var, but I'm not sure if the 'anti-pattern' line also applies to stubs as well as mocks? Thanks!

  • PHP SoapClient() function returning a single XML string

    - by gjb
    I am having difficulty with the PHP SoapClient() function. The SOAP request is successful, but the response is returned as an object containing a single XML string with the key "any". For example:

        <?php
        $params = array('strUsername' => 'Test', 'strPassword' => 'Test');
        $client = new SoapClient('http://www.example.com/webservice.asmx?wsdl',
                                 array('features' => SOAP_SINGLE_ELEMENT_ARRAYS));
        $result = $client->strExampleCall($params);
        print_r($result);
        ?>

    This outputs the following:

        stdClass Object
        (
            [strExampleCallResult] => stdClass Object
                (
                    [any] => <Response xmlns="" release="1.0.0" environment="Production" lang="en-GB"><ApplicationArea><Sender><SenderId>0</SenderId><ReferenceId>0</ReferenceId></Sender><Destination><DestinationId>1</DestinationId></Destination></ApplicationArea><DataArea><Result>1</Result></DataArea></Response>
                )
        )

    Subsequently, I cannot access properties of the object as I'd expect to:

        echo $result->strExampleCallResult->Response->DataArea->Result;

    Why isn't PHP parsing the SOAP response into properties of the returned object? I am using PHP 5.3.0 and believe the SOAP server is running .NET. Any suggestions will be much appreciated.

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy? The reason I ask is that there seems to be a lot of waste (and risk) in an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH even though they had no feature changes or hot fixes; they have to be tested simply because the business is rolling out updates every month. And consider the risk involved: if a mission-critical application takes a few weeks, and multiple departments, to test, is it realistic to expect to constantly regression-test it? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but those don't exist at the moment. Any advice?

  • Facebook with RestFB (Java)

    - by Trick
    I have just begun to use this and I have already stumbled on some (from my side) strange errors. I am using the RestFB JAR. My problem is that I cannot get my session key (this is the start of everything :)).

        FacebookClient facebookClient = new DefaultFacebookClient(API_KEY, SECRET_KEY);
        try {
            String token = facebookClient.execute("auth.createToken", String.class,
                Parameter.with("null", "null"));
            System.out.println(token);
            String session = facebookClient.execute("auth.getSession", String.class,
                Parameter.with("auth_token", token));
            System.out.println(session);
        } catch (FacebookException e) {
            e.printStackTrace();
        }

    I get the token correctly. Parameter.with("null", "null") is there because the method demands at least one parameter, even though creating a token doesn't expect any. When trying to get the session key, I get the following error:

        com.restfb.FacebookResponseStatusException: Received Facebook error response (code 100): Invalid parameter
            at com.restfb.DefaultFacebookClient.throwFacebookResponseStatusExceptionIfNecessary(DefaultFacebookClient.java:357)
            at com.restfb.DefaultFacebookClient.makeRequest(DefaultFacebookClient.java:320)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:188)
            at com.restfb.DefaultFacebookClient.execute(DefaultFacebookClient.java:178)

    The documentation for getting a session doesn't say any more parameters are required! Has anybody already tried this JAR, or do you have any other solution for Java?

    Edit: Does the application need to be "published" in searches to be able to do this?

  • LookAhead Regex in .Net - unexpected result

    - by AaronM
    Hello, I am a bit puzzled by my Regex results (and still trying to get my head around the syntax). I have been using http://regexpal.com/ to test out my expression, and it works as intended there; however, in C# it's not as expected. Here is a test - an expression of the following:

        (?=<open>).*?(?=</open>)

    on an input string of:

        <open>Text 1 </open>Text 2 <open>Text 3 </open>Text 4 <open>Text 5 </open>

    I would expect a result back of <open>Text 1, <open>Text 2, <open>Text 3... etc. However, when I do this in C# it only returns the first match of <open>Text 1. How do I get all five 'results' back from the Regex?

        Regex exx = new Regex("(?=<open>).*?(?=</open>)",
            RegexOptions.IgnoreCase | RegexOptions.Singleline);
        string input = "<open>Text 1</open> Text 2 <open> Text 3 </open> Text 4 <open> Text 5 </open>";
        string result = Regex.Match(input, exx.ToString(), exx.Options).ToString();
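    For context on the behaviour: Regex.Match returns only the first match; the enumerating form is Regex.Matches (or Match followed by NextMatch()). A minimal sketch against the question's own pattern and input:

        // Matches() returns every non-overlapping match, not just the first.
        Regex exx = new Regex("(?=<open>).*?(?=</open>)",
            RegexOptions.IgnoreCase | RegexOptions.Singleline);
        string input = "<open>Text 1</open> Text 2 <open> Text 3 </open> Text 4 <open> Text 5 </open>";

        foreach (Match m in exx.Matches(input))
        {
            Console.WriteLine(m.Value);
        }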

  • Rails - difference between config.cache_store and config.action_controller.cache_store?

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache uses the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to:

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache will now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc

        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }

        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

  • Reloading Sinatra app on every request on Windows

    - by Darth
    I've set up Rack::Reload according to this thread:

        # config.ru
        require 'rubygems'
        require 'sinatra'

        set :environment, :development
        require 'app'
        run Sinatra::Application

        # app.rb
        class Sinatra::Reloader < Rack::Reloader
          def safe_load(file, mtime, stderr = $stderr)
            if file == Sinatra::Application.app_file
              ::Sinatra::Application.reset!
              stderr.puts "#{self.class}: reseting routes"
            end
            super
          end
        end

        configure(:development) { use Sinatra::Reloader }

        get '/' do
          'foo'
        end

    I'm running with thin via thin start -R config.ru, but it only reloads newly added routes. When I change an already existing route, it still runs the old code. When I add a new route, it correctly reloads it, so the new route is accessible, but it doesn't reload anything else. For example, if I changed the routes to:

        get '/' do
          'bar'
        end

        get '/foo' do
          'baz'
        end

    then / would still serve foo, even though it has changed, but /foo would correctly reload and serve baz. Is this normal behavior, or am I missing something? I'd expect the whole source file to be reloaded. The only way around this I can think of right now is restarting the whole web server when the filesystem changes. I'm running on Windows Vista x64, so I can't use shotgun because of fork().

  • End-to-end kerberos delegated authentication in ASP.NET

    - by Erlend
    I'm trying to set up an internal website that will contact another backend service within the network on behalf of the user, using an HttpWebRequest. I have to use Integrated Windows Authentication on the ASP.NET application, as the backend system only supports this type of authentication. I'm able to set up IWA on the ASP.NET application, and it's using Kerberos as I expect it to. However, when the authentication is delegated to the backend system, it doesn't work anymore. This is because the backend system only supports Kerberos IWA, but the delegation, for some reason, even though the incoming request is Kerberos-authenticated, converts the authentication to NTLM before forwarding to the backend system. Does anybody know what I need to do on the ASP.NET application in order to allow it to forward the identity using Kerberos? I've currently tried the following, but it doesn't seem to work:

        CredentialCache credentialCache = new CredentialCache();
        credentialCache.Add(request.RequestUri, "Negotiate",
            CredentialCache.DefaultCredentials.GetCredential(request.RequestUri, "Kerberos"));
        request.Credentials = credentialCache;

    I've also tried setting "Kerberos" where it now says "Negotiate", but it doesn't seem to do much.
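    Two WebRequest properties that are easy to miss in this scenario are AuthenticationLevel and ImpersonationLevel. Whether they resolve the NTLM fallback also depends on the service accounts being trusted for delegation in Active Directory and on correct SPNs, so treat this as one thing to try rather than a confirmed fix:

        using System.Net.Security;
        using System.Security.Principal;

        // Ask for mutual (Kerberos-capable) authentication and allow the
        // caller's identity to be delegated to the backend service.
        request.UseDefaultCredentials = true;
        request.AuthenticationLevel = AuthenticationLevel.MutualAuthRequested;
        request.ImpersonationLevel = TokenImpersonationLevel.Delegation;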

  • Why is my Lucene index getting locked?

    - by Andrew Bullock
    I had an issue with my search not returning the results I expect. I tried to run Luke on my index, but it said the index was locked and I needed to Force Unlock it (I'm not a Jedi/Sith, though). I tried to delete the index folder and run my recreate-indices application, but the folder was locked. Using Unlocker, I've found that there are about 100 entries of w3wp.exe (same PID, different handle) with a lock on the index. What's going on? I'm doing this in my NHibernate configuration:

        c.SetListener(ListenerType.PostUpdate, new FullTextIndexEventListener());
        c.SetListener(ListenerType.PostInsert, new FullTextIndexEventListener());
        c.SetListener(ListenerType.PostDelete, new FullTextIndexEventListener());

    And here is the only place I query the index:

        var fullTextSession = NHibernate.Search.Search.CreateFullTextSession(this.unitOfWork.Session);
        var fullTextQuery = fullTextSession.CreateFullTextQuery(query, typeof(Person));
        fullTextQuery.SetMaxResults(100);
        return fullTextQuery.List<Person>();

    What's going on? What am I doing wrong? Thanks
