Search Results

Search found 22951 results on 919 pages for 'debug build'.


  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second. The data itself is very volatile; it can (and does) change quite rapidly. However the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour. I was looking into solutions like memcached however this particular one doesn't fulfil all our requirements which are listed below: The solution must at least have Java client API which is reasonably well maintained as the rest of app is written in Java and we are seasoned Java developers; The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster; The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max) so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es) like in memcached clients when a node goes down; Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster); There is no need to permanently store all the data but there might be a need of post-processing of some of the data (e.g. serialization to the DB). Price. Obviously we prefer free/open source but we're happy to pay a reasonable amount if a solution is worth it. In any way, paid 24hr/day support contract is a must. The whole thing has to be hosted in our data centers so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options would be available. Ideally the solution would be strictly consistent (as in CAP); however, eventual consistence can be considered as an option. Thanks in advance for any ideas.


  • Ladder-like word game in Java

    - by sasquatch90
    I've found this question http://stackoverflow.com/questions/2844190/choosing-design-method-for-ladder-like-word-game and I would also like to do this kind of program. I've written some code but already have two issues. Here's what I already have : GRID : public class Grid { public Grid(){} public Grid( Element e ){} } ELEMENT : public class Element { final int INVISIBLE = 0; final int EMPTY = 1; final int FIRST_LETTER = 2; final int OTHER_LETTER = 3; private int state; private String letter; public Element(){} //empty block public Element(int state){ this("", 0); } //filled block public Element(String s, int state){ this.state = state; this.letter = s; } public static void changeState(int s){ } public int getState(){ return state; } public boolean equalLength(){ return true; } public boolean equalValue(){ return true; } @Override public String toString(){ return "["+letter+"]"; } } MAIN: import java.util.Scanner; public class Main { public static void main(String[] args){ Scanner sc = new Scanner(System.in); System.out.println("Height: "); while (!sc.hasNextInt()) { System.out.println("int, please!"); sc.next(); } final int height = sc.nextInt(); Grid[] game = new Grid[height]; for(int i = 1; i <= height; i++) { String s; do { System.out.println("Length " + i + ", please!"); s = sc.next(); } while (s.length() != i); Element[] line = new Element[s.length()+1]; Element single = null; String[] temp = null; //issue here temp = s.split(""); System.out.println("s.length: "+s.length()); System.out.println("temp.length: "+temp.length); // for(String str : temp){ System.out.println("str:"+str); } for (int k = 0 ; k < temp.length ; k++) { if( k == 0 ){ single = new Element(temp[k], 2); System.out.println("single1: "+single); } else{ single = new Element(temp[k], 3); System.out.println("single2: "+single); } line[k] = single; } for (Element l : line) { System.out.println("line:"+l); } //issue here game[i] = line; } // for (Grid g : game) { System.out.println(g); } } } And sample output for debug : Height: 3 Length 1, please! A s.length: 1 temp.length: 2 str: str:A single1: [] single2: [A] line:[] line:[A] Here's what I think it should work like. I grab a word from user. Next create Grid element for whole game. Then for each line I create Element[] array called line. I split the given text and here's the first problem. Why string.split() adds a whitespace ? You can see clearly in output that it is added for no reason. How can I get rid of it (now I had to add +1 to the length of line just to run the code). Continuing I'm throwing the splitted text into temporary String array and next from each letter I create Element object and throw it to line array. Apart of this empty space output looks fine. But next problem is with Grid. I've created constructor taking Element as an argument, but still I can't throw line as Grid[] elements because of 'incompatible types'. How can I fix that ? Am I even doing it right ? Maybe I should get rid of line as Element[] and just create Grid[][] ?
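
    A note on the split() behaviour described above: on the JDKs current at the time (Java 7 and earlier), String.split("") matches a zero-width pattern at position 0 and therefore returns a leading empty string, which is where the extra blank element comes from. A minimal sketch (not from the original post) showing the effect and a character-based alternative that avoids it:

        public class SplitDemo {
            public static void main(String[] args) {
                String s = "A";
                String[] temp = s.split("");              // ["", "A"] on Java 7 and earlier
                System.out.println("split length: " + temp.length);

                // Walking the characters directly avoids the leading empty string:
                char[] chars = s.toCharArray();           // ['A'], length == s.length()
                for (int k = 0; k < chars.length; k++) {
                    String letter = String.valueOf(chars[k]);
                    System.out.println("letter " + k + ": " + letter);
                }
            }
        }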


  • Why am I getting a new session ID on every page fetch in my Perl WWW::Mechanize script?

    - by Phill Pafford
    So I'm scraping a site that I have access to via HTTPS, I can login and start the process but each time I hit a new page (URL) the cookie Session Id changes. How do I keep the logged in Cookie Session Id? #!/usr/bin/perl -w use strict; use warnings; use WWW::Mechanize; use HTTP::Cookies; use LWP::Debug qw(+); use HTTP::Request; use LWP::UserAgent; use HTTP::Request::Common; my $un = 'username'; my $pw = 'password'; my $url = 'https://subdomain.url.com/index.do'; my $agent = WWW::Mechanize->new(cookie_jar => {}, autocheck => 0); $agent->{onerror}=\&WWW::Mechanize::_warn; $agent->agent('Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.3) Gecko/20100407 Ubuntu/9.10 (karmic) Firefox/3.6.3'); $agent->get($url); $agent->form_name('form'); $agent->field(username => $un); $agent->field(password => $pw); $agent->click("Log In"); print "After Login Cookie: "; print $agent->cookie_jar->as_string(); print "\n\n"; my $searchURL='https://subdomain.url.com/search.do'; $agent->get($searchURL); print "After Search Cookie: "; print $agent->cookie_jar->as_string(); print "\n"; The output: After Login Cookie: Set-Cookie3: JSESSIONID=367C6D; path="/thepath"; domain=subdomina.url.com; path_spec; secure; discard; version=0 After Search Cookie: Set-Cookie3: JSESSIONID=855402; path="/thepath"; domain=subdomain.com.com; path_spec; secure; discard; version=0 Also I think the site requires a CERT (Well in the browser it does), would this be the correct way to add it? $ENV{HTTPS_CERT_FILE} = 'SUBDOMAIN.URL.COM'; ## Insert this after the use HTTP::Request... Also for the CERT In using the first option in this list, is this correct? X.509 Certificate (PEM) X.509 Certificate with chain (PEM) X.509 Certificate (DER) X.509 Certificate (PKCS#7) X.509 Certificate with chain (PKCS#7)


  • Java HTTP request to api.stackexchange with a JSON response

    - by taoder
    I'm trying to use the api-stackexchange with java but when I do the request and try to parse the response with a json parser I have an error. public ArrayList<Question> readJsonStream(InputStream in) throws IOException { JsonReader reader = new JsonReader(new InputStreamReader(in, "UTF-8")); reader.setLenient(true); try { System.out.println(reader.nextString()); // ? special character return readItem(reader); } finally { reader.close(); } } public ArrayList<Question> readItem(JsonReader reader) throws IOException { ArrayList<Question> questions = new ArrayList<Question>(); reader.beginObject(); while (reader.hasNext()) { System.out.println("here");//not print the error is before String name = reader.nextName(); if (name.equals("items")) { questions = readQuestionsArray(reader); } } reader.endObject(); return questions; } public final static void main(String[] args) throws Exception { URIBuilder builder = new URIBuilder(); builder.setScheme("http").setHost("api.stackexchange.com").setPath("/2.0/search") .setParameter("site", "stackoverflow") .setParameter("intitle" ,"workaround") .setParameter("tagged","javascript"); URI uri = builder.build(); String surl = fixEncoding(uri.toString()+"&filter=!)QWRa9I-CAn0PqgUwq7)DVTM"); System.out.println(surl); Test t = new Test(); try { URL url = new URL(surl); t.readJsonStream(url.openStream()); } catch (IOException e) { e.printStackTrace(); } } And the error is: com.google.gson.stream.MalformedJsonException: Expected literal value at line 1 column 19 Here is an example of the Json : { "items": [ { "question_id": 10842231, "score": 0, "title": "How to push oath token to LocalStorage or LocalSession and listen to the Storage Event? (SoundCloud Php/JS bug workaround)", "tags": [ "javascript", "javascript-events", "local-storage", "soundcloud" ], "answers": [ { "question_id": 10842231, "answer_id": 10857488, "score": 0, "is_accepted": false } ], "link": "http://stackoverflow.com/questions/10842231/how-to-push-oath-token-to-localstorage-or-localsession-and-listen-to-the-storage", "is_answered": false },... So what's the problem? Is the Json really malformed? Or did I do something not right? Thanks, Anthony
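
    One likely cause of the unreadable first character and the MalformedJsonException above: the Stack Exchange API serves its responses gzip-compressed, so the raw bytes from url.openStream() are not plain JSON. A hedged sketch (an assumption about this particular failure, not code from the original post) that unwraps the stream before parsing:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        public class GzipJsonDemo {
            public static void main(String[] args) throws IOException {
                URL url = new URL("http://api.stackexchange.com/2.0/search"
                        + "?site=stackoverflow&intitle=workaround&tagged=javascript");
                InputStream raw = url.openStream();
                InputStream unzipped = new GZIPInputStream(raw);     // undo the compression
                InputStreamReader reader = new InputStreamReader(unzipped, "UTF-8");
                int c = reader.read();
                System.out.println("first character: " + (char) c);  // should now be '{'
                reader.close();
            }
        }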


  • Anyone have experience calling Rake from MSBuild for code gen and other benefits? How did it go? What insights can you share?

    - by Charlie Flowers
    While programming in C# using Visual Studio 2008, I often wish for "automatic" code generation. If possible, I'd like to achieve it by making my MSBuild solution file call out to Rake, which would call Ruby code for the code generation, having the resulting generated files automatically appear in my solution. Here's one business example (of many possible examples I could name) where this kind of automatic code generation would be helpful. In a recent project I had an interface with some properties that contained dollar amounts. I wanted a second interface and a third interface that had the same properties as the first interface, except they were "qualified" with a business unit name. Something like this: public interface IQuarterlyResults { double TotalRevenue { get; set; } double NetProfit { get; set; } } public interface IConsumerQuarterlyResults { double ConsumerTotalRevenue { get; set; } double ConsumerNetProfit { get; set; } } public interface ICorporateQuarterResults { double CorporateTotalRevenue { get; set; } double CorporateNetProfit { get; set; } } In this example, there is a "Consumer Business Unit" and a "Corporate Business Unit". Every property on IQuarterlyResults becomes a property called "Corporate" + [property name] on ICorporateQuarterlyResults, and likewise for IConsumerQuarterlyResults. Why make interfaces for these, rather than merely having an instance of IQuarterlyResults for Consumer and another instance for Corporate? Because, when working with the calculator object I was building, the user had to deal with 100's of properties, and it is much less confusing if he can deal with "fully qualified" property names such as "ConsumerNetProfit". But let's not get bogged down in this example. It is only an example and not the main question. The main question is this: I love using Ruby and ERB for code generation, and I love using Rake to manage dependencies between tasks. To solve the problem above, what I'd like to do is have MSBuild call out to Rake, and have Rake / Ruby read the list of properties on the "core" interface and then generate the code to make all the dependent interfaces and their properties. This would get triggered every time I do a build, because I'd put it into the MSBuild file for the VS.NET solution. Has anyone tried anything like this? How did it work out for you? What insights can you share about pros, cons, tips for success, etc.? Thanks!
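
    On the mechanics of the main question: MSBuild can shell out to Rake with the built-in Exec task, typically from the BeforeBuild target so the generated files exist before compilation. A hedged sketch for the project or solution-level build file; the rake task name is illustrative, not taken from the original project:

        <!-- Runs Rake before every compile; "generate_interfaces" is a hypothetical task name. -->
        <Target Name="BeforeBuild">
          <Exec Command="rake generate_interfaces"
                WorkingDirectory="$(ProjectDir)" />
        </Target>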


  • CSS RGBA border / background alpha double

    - by stockli
    I'm working on a website that has a lot of transparency involved, and I thought I would try to build it entirely in RGBA and then do fallbacks for IE. I need a "facebox" style border effect, where the outer border is rounded and is less opaque than the background of the box it surrounds. The last example from http://24ways.org/2009/working-with-rgba-colour seems to suggest that it's possible, but I can't seem to get it to work. When I try the following: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>RGBA Test</title> <style type='text/css'> body { background: #000; color: #fff; } #container { width: 700px; margin: 0 auto; background: rgba(255, 255, 255, 0.2); border: 10px solid rgba(255, 255, 255, 0.1); padding: 20px; } </style> </head> <body> <div id='container'> This should look like a facebox. </div> </body></html> It seems like the background "extends" underneath the border of the element, which causes the pixel values to get added together. Thus, when both the background and the border are semi-transparent, the border will ALWAYS be more opaque than the background of the element. This is exactly the opposite of what I am trying to achieve, but it seems like it should be possible based on the examples I've seen. I should also add that I can't use another element inside the container, because I'm also going to use a border-radius on the container to get rounded corners, and webkit squares the corners of the child elements if they have a background assigned, which would essentially mean a rounded outer border with square contents. Sorry I can't post an image of this... Apparently I don't have enough rep to post an image.
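
    A commonly used way to get the effect described above is to stop the background from extending under the border by clipping it at the padding edge. A hedged sketch of the container rule (prefixed property names are shown for the browsers of the time; exact prefix support varied):

        #container {
            width: 700px;
            margin: 0 auto;
            background: rgba(255, 255, 255, 0.2);
            border: 10px solid rgba(255, 255, 255, 0.1);
            padding: 20px;
            -moz-background-clip: padding;      /* older Gecko */
            -webkit-background-clip: padding;   /* older WebKit */
            background-clip: padding-box;       /* standard */
        }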


  • Autoconf -- including a static library (newbie)

    - by EB
    I am trying to migrate my application from manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate. That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) It is also not named traditionally (it does not start with "lib" or "l"). Manual compilation is working with these (directory is not predictable - this is just an example): gcc ... -I/home/john/mystuff /home/john/mystuff/helper.a (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.) So, in my configure.ac, I can use the relevant configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CPFLAGS and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the name of the needed files are helper.h and helper.a (which are both in the same directory), here is what works so far: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"]) Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library which produces earlier errors that indicate that the function prototypes have been loaded and used to compile. I tried adding the location that contains the .a file to LDFLAGS and then doing a AC_CHECK_LIB but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location"; AC_CHECK_LIB(helper)]) No dice. AC_CHECK_LIB is looking for -lhelper I guess (or libhelper?) so I'm not sure if that's a problem, so I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location/helper.a"]) To emulate the manual compilation, I tried removing the -L but that doesn't help: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS $location/helper.a"]) I tried other combinations and permutations, but I think I might be missing something more fundamental....
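
    For the linking part, the usual convention is to put the full path to a static archive into LIBS rather than LDFLAGS, so it lands after the object files on the link line; AC_CHECK_LIB only helps for libraries that can be found via -lname, which this one cannot. A hedged sketch reusing the variable names from the question:

        dnl Assumes $location holds the directory given as a configure argument.
        AC_CHECK_HEADER([$location/helper.h],
                        [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
                         CFLAGS="$CFLAGS -I$location"
                         LIBS="$location/helper.a $LIBS"])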


  • What is the best way to read and write cXML documents in C#?

    - by tetranz
    I know this is a vague open ended question. I'm hoping to get some general direction. I need to add cXML punchout to an ASP.NET C# site / application. This is replacing something that I wrote years ago in ColdFusion. I'm a reasonably experienced C# developer but I haven't done much with XML. There seems to be lots of different options for processing XML in .NET. Here's the open ended question: Assuming that I have an XML document in some form, eg a file or a string, what is the best way to read it into my code? I want to get the data and then query databases etc. The cXML document size and our traffic volumes are easily small enough so that loading the a cXML document into memory is not a problem. Should I: 1) Manually build classes based on the dtd and use the XML Serializer? 2) Use a tool to generate classes. There are sample cXML files downloadable from Ariba.com. I tried xsd.exe to generate an xsd and then xsd.exe /c to generate classes. When I try to deserialize I get errors because there seems to be "confusion" around whether some elements should be single values or arrays. I tried the CodeXS online tool but that gives errors in it's log and errors if I try to deserialize a sample document. 2) Create a dataset and ReadXml()? 3) Create a typed dataset and ReadXml()? 4) Use Linq to XML. I often use Linq to Objects so I'm familiar with Linq in general but I'm struggling to see what it gives me in this situation. 5) Some other means. I guess I need to improve my understanding of XML in general but even so ... am I missing some obvious way of doing this? In the old ColdFusion site I found a free component ("tag") which basically ignored any schema and read the XML into a "structure" which is essentially a series of nested hash tables which was then easy to read in code. That was probably quite sloppy but it worked. I also need to generate XML files from my C# objects. Maybe Linq to XML will be good for that. I could start with a default "template" document and manipulate it before saving. Thanks for any pointers ...
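
    To give a feel for option 4, here is a hedged LINQ to XML sketch for pulling a couple of values out of an incoming cXML document without generated classes. The element and attribute names follow the general shape of a cXML PunchOutSetupRequest and should be checked against the documents actually received; the DTD setting is there because cXML files carry a DOCTYPE declaration:

        using System;
        using System.Linq;
        using System.Xml;
        using System.Xml.Linq;

        class CxmlReaderSketch
        {
            static void Main()
            {
                // Skip the DTD instead of rejecting it (the reader default is to prohibit DTDs).
                var settings = new XmlReaderSettings { DtdProcessing = DtdProcessing.Ignore };
                using (var xmlReader = XmlReader.Create("punchout-setup-request.xml", settings))
                {
                    XDocument doc = XDocument.Load(xmlReader);

                    string payloadId = (string)doc.Root.Attribute("payloadID");
                    XElement buyerCookie = doc.Descendants("BuyerCookie").FirstOrDefault();

                    Console.WriteLine("payloadID:   " + payloadId);
                    Console.WriteLine("BuyerCookie: " + (buyerCookie != null ? buyerCookie.Value : "(none)"));
                }
            }
        }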


  • Magento: add product twice to cart, with different attributes!

    - by Peter
    Hello all, I have been working on this for a whole day but I cannot find a solution. I have a product (lenses) which has identical attributes, but the user can choose one attribute set for one eye and another attribute set for the other. On the frontend I got it working, see it here: http://connecta.si/clarus/index.php/featured/acuvue-oasys-for-astigmatism.html So the user can select attributes for the left or right eye, but it is the same product. I built a function which should take a product in the cart (before save) and add another set of attributes, so there should be two products in the cart. What happens is that there are two products, but with the same set of attributes. Here is the snippet of the function: $req = Mage::app()->getRequest(); $request['qty'] = 1; $request['product'] = 15; $request['uenc'] = $req->get('uenc'); $request['options'][1] = 1; $request['options'][3] = 5; $request['options'][2] = 3; $reqo = new Varien_Object($request); $newitem = $quote->addProduct($founditem->getProduct(), $reqo); //add another one ------------------------------------------ $request['qty'] = 1; $request['product'] = 15; $request['uenc'] = $req->get('uenc'); $request['options'][1] = 2; $request['options'][3] = 6; $request['options'][2] = 4; $reqo = new Varien_Object($request); $newitem = $quote->addProduct($founditem->getProduct(), $reqo); Or another test, with some other functions (again, the product is added with quantity 2, but the same attributes...): $req = Mage::app()->getRequest(); $request['qty'] = 1; $request['product'] = 15; $request['uenc'] = $req->get('uenc'); $request['options'][1] = 2; $request['options'][3] = 6; $request['options'][2] = 4; $product = $founditem->getProduct(); $cart = Mage::getSingleton('checkout/cart'); //delete all first… $cart->getItems()->clear()->save(); $reqo = new Varien_Object($request); $cart->addProduct($founditem->getProduct(), $reqo); $cart->getItems()->save(); $request['options'][1] = 1; $request['options'][3] = 5; $request['options'][2] = 3; $reqo = new Varien_Object($request); $cart->addProduct($founditem->getProduct(), $reqo); $cart->getItems()->save(); I really don't know what more to do; any advice is welcome. This is my first module in Magento. Thank you, Peter


  • Android: Creating custom class of resources

    - by Sebastian
    Hi, R class on android has it's limitations. You can't use the resources dynamically for loading audio, pictures or whatever. If you wan't for example, load a set of audio files for a choosen object you can't do something like: R.raw."string-upon-choosen-object" I'm new to android and at least I didn't find how you could do that, depending on what objects are choosen or something more dynamic than that. So, I thought about making it dynamic with a little of memory overhead. But, I'm in doubt if it's worth it or just working different with external resources. The idea is this: Modify the ant build xml to execute my own task. This task, is a java program that parses the R.java file building a set of HashMaps with it's pair (key, value). I have done this manually and It's working good. So I need some experts voice about it. This is how I will manage the whole thing: Generate a base Application class, e.g. MainApplicationResources that builds up all the require methods and attributes. Then, you can access those methods invoking getApplication() and then the desired method. Something like this: package [packageName] import android.app.Application; import java.util.HashMap; public class MainActivityResources extends Application { private HashMap<String,Integer> [resNameObj1]; private HashMap<String,Integer> [resNameObj2]; ... private HashMap<String,Integer> [resNameObjN]; public MainActivityResources() { super(); [resNameObj1] = new HashMap<String,Integer>(); [resNameObj1].put("[resNameObj1_Key1]", new Integer([resNameObj1_Value1])); [resNameObj1].put("[resNameObj1_Key2]", new Integer([resNameObj1_Value2])); [resNameObj2] = new HashMap<String,Integer>(); [resNameObj2].put("[resNameObj2_Key1]", new Integer([resNameObj2_Value1])); [resNameObj2].put("[resNameObj2_Key2]", new Integer([resNameObj2_Value2])); ... [resNameObjN] = new HashMap<String,Integer>(); [resNameObjN].put("[resNameObjN_Key1]", new Integer([resNameObjN_Value1])); [resNameObjN].put("[resNameObjN_Key2]", new Integer([resNameObjN_Value2])); } public int get[ResNameObj1](String resourceName) { return [resNameObj1].get(resourceName).intValue(); } public int get[ResNameObj2](String resourceName) { return [resNameObj2].get(resourceName).intValue(); } ... public int get[ResNameObjN](String resourceName) { return [resNameObjN].get(resourceName).intValue(); } } The question is: Will I add too much memory use of the device? Is it worth it? Regards,
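
    For comparison with the generated-HashMap idea above, the platform also offers a runtime lookup from resource name to identifier, which avoids the extra build step and the in-memory maps entirely. A hedged sketch; the "audio_" naming convention is illustrative, not something from the original post:

        import android.app.Activity;
        import android.media.MediaPlayer;

        public class AudioHelper {
            public static void playFor(Activity activity, String objectName) {
                // Resolve e.g. res/raw/audio_dog.mp3 from the chosen object's name at runtime.
                int resId = activity.getResources().getIdentifier(
                        "audio_" + objectName, "raw", activity.getPackageName());
                if (resId != 0) {    // getIdentifier() returns 0 when nothing matches
                    MediaPlayer.create(activity, resId).start();
                }
            }
        }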


  • Criteria query returns hydrated object in SQLite but not SqlServer

    - by Berryl
    I have a method that returns a resource fully hydrated when the db is SQLite but when the identical code is used by SqlServer the object is not fully hydrated. I'll explain that with the code after some brief background. I my domain various otherwise unrelated things like an Employee or a Machine can be used as a Resource that can be allocated to. In the object model an example of this would be: /// <summary>Wraps a <see cref="StaffMember"/> in a <see cref="ResourceBase"/>. </summary> public class StaffMemberResource : ResourceBase { public virtual StaffMember StaffMember { get; private set; } public StaffMemberResource(StaffMember staffMember) { Check.RequireNotNull<StaffMember>(staffMember); base.BusinessId = staffMember.Number.ToString(); base.Name = staffMember.Name.ToString(); base.OrganizationName = staffMember.Department.Name; StaffMember = staffMember; } [UsedImplicitly] protected StaffMemberResource() { } } And in the db tables, there is a table per class inheritance where the ResourceBase has a discriminator and the id of the actual resource (ie, StaffMember) StaffMember - 1 ---- M- ResourceBase - 1 ----- M - Allocation The Code public override StaffMemberResource BuildResource(IActivityService activityService) { var sessionFactory = _GetSessionFactory(); var session = sessionFactory.GetCurrentSession(); StaffMemberResource result; using (var tx = session.BeginTransaction()) { var propertyName = ExprHelper.GetPropertyName<StaffMember>(x => x.Number); var staff = session.CreateCriteria<StaffMember>() .Add(Restrictions.Eq(propertyName, new EmployeeNumber(_testData.Resource_1.BusinessId))) .UniqueResult<StaffMember>(); if (staff == null) { ... build up a staff member result = new StaffMemberResource(staff); } else { ////////// var property = ExprHelper.GetPropertyName<StaffMemberResource>(x => x.StaffMember); result = session.CreateCriteria<StaffMemberResource>() .Add(Restrictions.Eq(property, staff)) .UniqueResult<StaffMemberResource>(); } /////////// tx.Commit(); } return result; } It's that second criteria query that works "properly" with SQLite but not with SqlServer. By properly I mean that the employee numer is translated into a ResourceBase.BusinessId, Name is flattened out into a ResourceBase.Name, etc. Does anyone know why this might be? Cheers, Berryl


  • Using one data source across multiple views in Kendo UI SPA

    - by user3731783
    I am trying to build a Kendo UI SPA. I have two views. View 1 (appListView) shows Application Details in a grid and view 2 (activityView) will have a dropdown for application names and a grid that shows the activity for selected application As I am loading all the application details on the loading of view 1, I would like to re-use those details to populate the dropdown on view 2. Please see my code below. Everything works fine but when I go to View 2 it makes a call to the service again to get application details. I would like to use the existing data if it is already loaded and if the uses comes to view 2 directly then it should get application data also. I am not sure what I am missing in the code. View Markup: <script id="appListView" type="text/x-kendo-template"> <h3 data-bind="html: displayName"></h3> <div data-role="grid" data-editable="{'mode':'popup'}" data-bind="source: items" data-columns="[ {'field': 'Name'}, {'field': 'ContactEmail','title':'Contact Email'} ]"> </div> </script> <script id="" type="text\x-kendo-template"> <div> Activity for Application&nbsp;&nbsp; <input name="AppName" data-role="dropdownlist" data-source="appsModel.items" data-text-field="Name" data-value-field="Id" data-option-label="Choose an application name" style="width:250px;" /> </div> <div id="Activities" data-role="grid" data-bind="source: items" data-auto-bind="false" data-columns="[ {'field': 'Domain','title':'Domain'}, {'field': 'ActivityType','title':'Activity Type'} ]"> </div> </script> js with DataSource and View Model: //data sources var applications = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, serverFiltering : true, transport: { read: { url: '/api/App', dataType: 'json', type:'GET' } } }); var activities = new kendo.data.DataSource({ schema: { model: { id: "Id" } }, transport: { read: { url: '/api/Activity', dataType: 'json', type: 'GET' }, parameterMap: function (data, type) { if (type == "read") { return 'appId=' + $("#AppName").val() ; } } } }); //Models var appsModel = kendo.observable({ items: applications, displayName: 'My Applications' }); var activityModel = kendo.observable({ items: activities, onAppChange: function(t){ $("#Activities").data("kendoGrid").dataSource.read(); }, dispayName: 'Application Activities' }); //views var layout = new kendo.Layout("layout-template"); var appListView = new kendo.View("appListView", { model: appsModel }); var activityView = new kendo.View("activityView", { model: activityModel }); Thank you for taking time to read this long question.


  • How to enable a UAC prompt programmatically

    - by peter
    I want to implement a UAC prompt for an application in visualc++ the operating system is 32bit x7460(2processor) Windowsserver 2008 the exe is myproject.exe through manifest.. Here for testing i wl build the application in Windows XP OS and copy the exe in to system containg the Windowsserver 2008 machine and replace it So what i did is i added a manifest like this name of that is myproject.exe.manifest My project has 3 folders like Headerfile,Resourcefile and Source file.I added this manifest(myproject.exe.manifest) in the Sourcefile folder containing other cpp and c code <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <assemblyIdentity version="11.1.4.0" processorArchitecture="X7460" name="myproject" type="win32"/> <description>myproject Problem</description> <!-- Identify the application security requirements. --> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2"> <security> <requestedPrivileges> <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/> </requestedPrivileges> </security> </trustInfo> </assembly> then i added this line of code in Resourcefile(.rc).Means one header file is there(Myproject.h).I added the line of code there #define MANIFEST_RESOURCE_ID 1 MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" Finally i did the following step 1. Open your project in Microsoft Visual Studio 2005. 2. Under Project, select Properties. 3. In Properties, select Manifest Tool, and then select Input and Output. 4. Add in the name of your application manifest file under Additional manifest files. 5. Rebuild your application. But i am getting lot of Syntax errors Is there any problems in the way which i followed.If i commented the line define MANIFEST_RESOURCE_ID 1 MANIFEST_RESOURCE_ID RT_MANIFEST "myproject.exe.manifest" which added in Myproject.h for adding values in .rc file there willnot any error other than this general error c1010070: Failed to load and parse the manifest. The system cannot find the file specified. .\myproject.exe.manifest How to enable UAC prompt through programming


  • Drupal 7 Forms API Conditional Logic not working in IE

    - by Francis Yaconiello
    I have a drupal 7 form with a bunch of fields: $form['account_type'] = array( '#title' => t('Utility Account Type'), '#type' => 'select', '#options' => necp_enrollment_administration_portal_account_type_options(), '#required' => TRUE, '#default_value' => isset($form_state['values']['account_type']) ? $form_state['values']['account_type'] : '', ); // Should show if account_type = 1 $form['home_wrapper'] = array( '#type' => 'fieldset', '#states' => array( 'visible' => array( ':input[name="account_type"]' => array('value' => 1), ), ), ); $form['home_wrapper']['first_name_1'] = array( '#title' => t('Primary Account First Name'), '#type' => 'textfield', '#default_value' => isset($form_state['values']['first_name_1']) ? $form_state['values']['first_name_1'] : '', '#states' => array( 'required' => array( ':input[name="account_type"]' => array('value' => 1), ), ), ); $form['home_wrapper']['last_name_1'] = array( '#title' => t('Primary Account Last Name'), '#type' => 'textfield', '#default_value' => isset($form_state['values']['last_name_1']) ? $form_state['values']['last_name_1'] : '', '#states' => array( 'required' => array( ':input[name="account_type"]' => array('value' => 1), ), ), ); // Should show if account_type = 2 $form['business_wrapper'] = array( '#type' => 'fieldset', '#states' => array( 'visible' => array( ':input[name="account_type"]' => array('value' => 2), ), ), ); $form['business_wrapper']['company_name'] = array( '#title' => t('Company/Organization'), '#type' => 'textfield', '#default_value' => isset($form_state['values']['company_name']) ? $form_state['values']['company_name'] : '', '#states' => array( 'required' => array( ':input[name="account_type"]' => array('value' => 2), ), ), ); In Firefox/Chrome/Opera all versions this form behaves as it should. However in all versions of IE the form initializes with display:none; style on all of the conditional fields regardless of what the value in account_type is. Changing the selected option of account_type does not effect the hidden status. Any tips on debugging this form would be awesome. Notes: I am not much of a Drupal developer, I inherited this site. Just trying to iron out the last couple bugs so we can go live there are more fields than are listed above, I just gave you some of the applicable ones so that you could get the gist of how my forms were setup current url for the form in development: https://northeastcleanpower.com/enroll_new I'm using http://www.browserstack.com/ to debug IE 7 - 10pp4 (I think we only have to support 8 and up though) I've also tried: ':select[name="account_type"]' => array('value' => 1), '#edit-account-type' => array('value' => 1),


  • C++: casting to void* and back

    - by MInner
    * ---Edit - now the whole sourse* When I debug it on the end, "get" and "value" have different values! Probably, I convert to void* and back to User the wrong way? #include <db_cxx.h> #include <stdio.h> struct User{ User(){} int name; int town; User(int a){}; inline int get_index(int a){ return town; } //for another stuff }; int main(){ try { DbEnv* env = new DbEnv(NULL); env->open("./", DB_CREATE | DB_INIT_MPOOL | DB_THREAD | DB_INIT_LOCK | DB_INIT_TXN | DB_RECOVER | DB_INIT_LOG, 0); Db* datab = new Db(env, 0); datab->open(NULL, "db.dbf", NULL, DB_BTREE, DB_CREATE | DB_AUTO_COMMIT, 0); Dbt key, value, get; char a[10] = "bbaaccd"; User u; u.name = 1; u.town = 34; key.set_data(a); key.set_size(strlen(a) + 1 ); value.set_data((void*)&u); value.set_size(sizeof(u)); get.set_flags(DB_DBT_MALLOC); DbTxn* txn; env->txn_begin(NULL, &txn, 0); datab->put(txn, &key, &value, 0); datab->get(txn, &key, &get, 0); txn->commit(0); User g; g = *((User*)&get); printf("%d", g.town); getchar(); return 0; }catch (DbException &e){ printf("%s", e.what()); getchar(); } solution create a kind of "serializator" what would convert all POD's into void* and then will unite these pieces PS Or I'd rewrite User into POD type and everything will be all right, I hope. Add It's strange, but... I cast a defenetly non-pod object to void* and back (it has std::string inside) and it's all right (without sending it to the db and back). How could it be? And after I cast and send 'trough' db defenetly pod object (no extra methods, all members are pod, it's a simple struct {int a; int b; ...}) I get back dirted one. What's wrong with my approach? Add about week after first 'add' Damn... I've compiled it ones, just for have a look at which kind of dirt it returnes, and oh! it's okay!... I can't ! ... AAh!.. Lord... A reasonable question (in 99.999 percent of situations right answer is 'my', but... here...) - whos is this fault? My or VSs?
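
    One detail worth noting in the code above: the read-back casts the address of the Dbt wrapper itself (&get) rather than the buffer it points to. A hedged fragment, reusing the variables from main() above, that copies the stored bytes out through Dbt::get_data() instead (this only round-trips safely while User stays a plain struct of ints, with no pointers or std::string members):

        // Replace the `User g; g = *((User*)&get);` lines with something like:
        User g;
        std::memcpy(&g, get.get_data(), sizeof(User));   // needs <cstring>
        printf("%d", g.town);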


  • Change form field options dynamically (and restore previously selected options) using jQuery

    - by Martin
    I'm fairly new to both jQuery and JavaScript so can anyone please point out what I'm missing here? I have a form with multiple pairs of select fields, where the content of the second field relies on the selection of the first one. Once a value is chosen from the first one then a piece of jQuery code empties the corresponding second select options, gets new values as JSON and inserts them as new select values. All of this works with the code I've written. However if a user has chosen a second value, I'd like to restore it if possible (meaning if the new values also include the one previously selected). I tried to do this by first saving the second value and then trying to set it once the new options have been inserted. This did not work. However when trying to debug I inserted an alert() just before restoring the second value and with that in there it does work... Am I missing something very basic here? Any help would be appreciated. HTML: <select name="party-0-first" id="id_party-0-first"> <option value="" selected="selected">---------</option> <option value="1">first_value1</option> <option value="2">first_value2</option> <option value="3">first_value3</option> <option value="4">first_value4</option> </select> <select name="party-0-second" id="id_party-0-second"> <option value="" selected="selected">---------</option> <option value="1">second_value1</option> <option value="2">second_value2</option> <option value="3">second_value3</option> <option value="4">second_value4</option> </select> <select name="party-1-first" id="id_party-1-first"> ... JavaScript: $.fn.setSecond = function() { var divName = $(this).attr("name"); divName = divName.replace('-first', ''); divName = divName.replace('party-', ''); // Save the currently selected value of the "second" select box // This seems to work properly setValue = $('#id_party-' + divName + '-second').val(); // Get the new values $.getJSON("/json_opsys/", { client: $(this).val() }, function(data){ $('#id_party-' + divName + '-second').empty(); $('#id_party-' + divName + '-second').append('<option value="" selected="selected">---------</option>'); $.each(data, function(i, item){ $('#id_party-' + divName + '-second').append('<option value="' + item.pk + '">' + item.fields.name + '</option>'); }); }); // Here I try to restore the selected value // This does not work normally, however if I place a javascript alert() before // this for some reason it does work $('#id_party-' + divName + '-second').val(setValue); $(document).ready(function(){ $('[id*=first]').change(function() { $(this).setSecond(); }); });
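
    The alert is a clue here: $.getJSON() is asynchronous, so the restore runs before the new options exist, and the alert only appears to fix things because it delays execution until the response has arrived. A hedged sketch, reusing the names from the code above, with the restore moved into the success callback:

        $.getJSON("/json_opsys/", { client: $(this).val() }, function (data) {
            var $second = $('#id_party-' + divName + '-second');
            $second.empty();
            $second.append('<option value="" selected="selected">---------</option>');
            $.each(data, function (i, item) {
                $second.append('<option value="' + item.pk + '">' + item.fields.name + '</option>');
            });
            $second.val(setValue);   // restore the previous selection only after the options exist
        });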


  • Malformed Farsi characters on AWT

    - by jlover2010
    Hi As i started programming by jdk6, i had no problem in text components neither in awt nor in swing. But for labels or titles of awt components, yes : I couldn't have Farsi characters displayable on AWTs just as simple as Swing by typing them into the source code. lets check this SSCCE : import javax.swing.*; import java.awt.*; import java.io.*; import java.util.Properties; public class EmptyFarsiCharsOnAWT extends JFrame{ public EmptyFarsiCharsOnAWT() { super("????"); setDefaultCloseOperation(3); setVisible(rootPaneCheckingEnabled); } public static void main(String[] args) throws AWTException, IOException { JFrame jFrame = new EmptyFarsiCharsOnAWT(); MenuItem show ; // approach 1 = HardCoding : /* show = new MenuItem("\u0646\u0645\u0627\u06cc\u0634"); * */ // approach 2 = using simple utf-8 saved text file : /* BufferedReader in = new BufferedReader(new FileReader("farsiLabels.txt")); String showLabel = in.readLine(); in.close(); show = new MenuItem(showLabel); * */ // approach 3 = using properties file : FileReader in = new FileReader("farsiLabels.properties"); Properties farsiLabels = new Properties(); farsiLabels.load(in); show = new MenuItem(farsiLabels.getProperty("tray.show")); PopupMenu popUp = new PopupMenu(); popUp.add(show); // creating Tray object Image iconIamge = Toolkit.getDefaultToolkit().getImage("greenIcon.png"); TrayIcon trayIcon = new TrayIcon(iconIamge, null, popUp); SystemTray tray = SystemTray.getSystemTray(); tray.add(trayIcon); jFrame.setIconImage(iconIamge); } } Yes, i know each of three approaches in source code does right when you may test it from IDE , but if you make a JAR contains just this class (and its resources) by means of NetBeans project clean&build ,you won't see the expected characters and will just get EMPTY/BLANK SQUARES ! Unfortunately, opposed to other situations i encountered before, here there is no way to avoid using awt and make use of Swing in this case. And this was just an SSCCE i made to show the problem and my recent (also first) application suffers from this subject. And i think should have to let you know I posted this question a while ago in SDN and "the Java Ranch" forums and other native forums and still I'm watching... By the way i am using latest version of Netbeans IDE... I will be so grateful if anybody has a solution to this damn AWT components never rendered any Farsi character for me... Thanks in advance


  • Assembly Load and loading the "sub-modules" dependencies - "cannot find the file specified"

    - by Ted
    There are several questions out there that ask the same question. However the answers they received I cannot understand, so here goes: Similar questions: http://stackoverflow.com/questions/1874277/dynamically-load-assembly-and-manually-force-path-to-get-referenced-assemblies ; http://stackoverflow.com/questions/22012/loading-assemblies-and-its-dependencies-closed The question in short: I need to figure out how dependencies, ie References in my modules can be loaded dynamically. Right now I am getting "The system cannot find the file specified" on Assemblies referenced in my so called modules. I cannot really get how to use the AssemblyResolve event... The longer version I have one application, MODULECONTROLLER, that loads separate modules. These "separate modules" are located in well-known subdirectories, like appBinDir\Modules\Module1 appBinDir\Modules\Module2 Each directory contains all the DLLs that exists in the bin-directory of those projects after a build. So the MODULECONTROLLER loads all the DLLs contained in those folders using this code: byte[] bytes = File.ReadAllBytes(dllFileFullPath); Assembly assembly = null; assembly = Assembly.Load(bytes); I am, as you can see, loading the byte[]-array (so I dont lock the DLL-files). Now, in for example MODULE1, I have a static reference called MyGreatXmlProtocol. The MyGreatXmlProtocol.dll then also exists in the directory appBinDir\Modules\Module1 and is loaded using the above code When code in the MODULE1 tries to use this MyGreatXmlProtocol, I get: Could not load file or assembly 'MyGreatXmlProtocol, Version=1.0.3797.26527, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. So, in a post (like this one) they say that To my understanding reflection will load the main assembly and then search the GAC for the referenced assemblies, if it cannot find it there, you can then incorparate an assemblyResolve event: First; is it really needed to use the AssemblyResolve-event to make this work? Shouldnt my different MODULEs themself load their DLLs, as they are statically referenced? Second; if AssemblyResolve is the way to go - how do I use it? I have attached a handler to the Event but I never get anything on MyGreatXmlProctol... === EDIT === CODE regarding the AssemblyResolve-event handler: public GUI() { InitializeComponent(); AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve); ... } // Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args) { Console.WriteLine(args.Name); return null; } Hope I wasnt too fuzzy =) Thx
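
    For the second question, a hedged sketch of a resolve handler that actually returns something: it takes the simple name out of args.Name and probes the Modules subdirectories described above, loading the match as bytes to keep the no-lock behaviour. The directory layout is assumed from the description; adjust as needed:

        // requires: using System; using System.IO; using System.Reflection;
        static Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            // args.Name is a full display name, e.g. "MyGreatXmlProtocol, Version=1.0.3797.26527, ..."
            string shortName = new AssemblyName(args.Name).Name;
            string modulesDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "Modules");

            foreach (string dir in Directory.GetDirectories(modulesDir))
            {
                string candidate = Path.Combine(dir, shortName + ".dll");
                if (File.Exists(candidate))
                    return Assembly.Load(File.ReadAllBytes(candidate));
            }
            return null;   // fall back to the default (failing) resolution
        }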


  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try and understand the basics of it. I'm not trying to build the algorithms from scratch but I'm trying to make two different AES-256 implementation talk to each other. I've got a database that was populated with this Javascript implementation stored in Base64. Now, I'm trying to get an Objective-C method to decrypt its content but I'm a little lost as to where the differences in the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa but cannot make a string encrypted in Javascript decrypted in Cocoa or vice-versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation or all of these, which quite frankly, doesn't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this: @implementation NSString (Crypto) - (NSString *)encryptAES256:(NSString *)key { NSData *input = [self dataUsingEncoding: NSUTF8StringEncoding]; NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE]; return [Base64 encode:output]; } - (NSString *)decryptAES256:(NSString *)key { NSData *input = [Base64 decode:self]; NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE]; return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease]; } + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt { // 'key' should be 32 bytes for AES256, will be null-padded otherwise char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [input length]; // See the doc: For block ciphers, the output size will always be less than or // equal to the input size plus the size of one block. // That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void* buffer = malloc(bufferSize); size_t numBytesCrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt, kCCAlgorithmAES128, kCCOptionECBMode | kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES256, nil, // initialization vector (optional) [input bytes], dataLength, // input buffer, bufferSize, // output &numBytesCrypted ); if (cryptStatus == kCCSuccess) { // the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted]; } free(buffer); // free the buffer; return nil; } @end Of course, the input is Base64 decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation that always give the same encrypted string. I've read the answers of this post and it makes me believe I'm right about something along the lines of vector initialization but I'd need your help to pinpoint what's going on exactly. Thank you!


  • How do I create a Spring 3 + Tiles 2 webapp using REST-ful URLs?

    - by Ichiro Furusato
    I'm having a heck of a time resolving URLs with Spring 3.0 MVC. I'm just building a HelloWorld to try out how to build a RESTful webapp in Spring, nothing theoretically complicated. All of the examples I've been able to find are based on configurations that pay attention to file extensions ("*.htm" or "*.do"), include an artificial directory name prefix ("/foo") or even prefix paths with a dot (ugly), all approaches that use some artificial regex pattern as a signal to the resolver. For a REST approach I want to avoid all that muck and use only the natural URL patterns of my application. I would assume (perhaps incorrectly) that in web.xml I'd set a url-pattern of "/*" and pass everything to the DispatcherServlet for resolution, then just rely on URL patterns in my controller. I can't reliably get my resolver(s) to catch the URL patterns, and in all my trials this results in a resource not found error, a stack overflow (loop), or some kind of opaque Spring 3 ServletException stack trace — one of my ongoing frustrations with Spring generally is that the error messages are not often very helpful. I want to work with a Tiles 2 resolver. I've located my *.jsp files in WEB-INF/views/ and have a single line index.jsp file at the application root redirecting to the index file set by my layout.xml (the Tiles 2 Configurer). I do all the normal Spring 3 high-level configuration: <mvc:annotation-driven /> <mvc:view-controller path="/" view-name="index"/> <context:component-scan base-package="com.acme.web.controller" /> ...followed by all sorts of combinations and configurations of UrlBasedViewResolver, InternalResourceViewResolver, UrlFilenameViewController, etc. with all manner of variantions in my Tiles 2 configuration file. Then in my controller I've trying to pick up my URL patterns. Problem is, I can't reliably even get the resolver(s) to catch the patterns to send to my controller. This has now stretched to multiple days with no real progress on something I thought would be very simple to implement. I'm perhaps trying to do too much at once, though I would think this should be a simple (almost a default) configuration. I'm just trying to create a simple HelloWorld-type application, I wouldn't expect this is rocket science. Rather than me post my own configurations (which have ranged all over the map), does anyone know of an online example that: shows a simple Spring 3 MVC + Tiles 2 web application that uses REST-ful URLs (i.e., avoiding forced URL patterns such as file extensions, added directory names or dots) and relies solely on Spring 3 code/annotations (i.e., nothing outside of Spring MVC itself) to accomplish this? Is there an easy way to do this? Thanks very much for any help.


  • Objective-C design advice for using different data sources and swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work is depending on other people to complete their tasks, and a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper api calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In objective-c a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What is the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All command use the live API, rather than selecting them one by one, however that would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that who ever uses the factory to get an object, is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration i.e: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.


  • Using a Button to navigate to another Page in a NavigationWindow

    - by Will
    I'm trying to use the navigation command framework in WPF to navigate between Pages within a WPF application (desktop; not XBAP or Silverlight). I believe I have everything configured correctly, yet its not working. I build and run without errors, I'm not getting any binding errors in the Output window, but my navigation button is disabled. Here's the app.xaml for a sample app: <Application x:Class="Navigation.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" StartupUri="First.xaml"> </Application> Note the StartupUri points to First.xaml. First.xaml is a Page. WPF automatically hosts my page in a NavigationWindow. Here's First.xaml: <Page x:Class="Navigation.First" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="First"> <Grid> <Button CommandParameter="/Second.xaml" CommandTarget="{Binding RelativeSource= {RelativeSource FindAncestor, AncestorType={x:Type NavigationWindow}}}" Command="NavigationCommands.GoToPage" Content="Go!"/> </Grid> </Page> The button's CommandTarget is set to the NavigationWindow. The command is GoToPage, and the page is /Second.xaml. I've tried setting the CommandTarget to the containing Page, the CommandParameter to "Second.xaml" (First.xaml and Second.xaml are both in the root of the solution), and I've tried leaving the CommandTarget empty. I've also tried setting the Path to the Binding to various navigational-related public properties on the NavigationWindow. Nothing has worked so far. What am I missing here? I really don't want to do my navigation in code. Clarification. If, instead of using a button, I use a Hyperlink: <Grid> <TextBlock> <Hyperlink NavigateUri="Second.xaml">Go! </Hyperlink> </TextBlock> </Grid> everything works as expected. However, my UI requirements means that using a Hyperlink is right out. I need a big fatty button for people to press. That's why I want to use the button to navigate. I just want to know how I can get the Button to provide the same ability that the Hyperlink does in this case.


  • How to get the full result from asynchronous communication?

    - by rima
    Hi all refer to these post : here1 and here2 at last I solve my problem by build a asynchronous solution,and it work well!!! but there is a problem that i face with it,now my code is like this: class MyProcessStarter { private Process process; private StreamWriter myStreamWriter; private static StringBuilder shellOutput = null; public String GetShellOutput { get { return shellOutput.ToString(); }} public MyProcessStarter(){ shellOutput = new StringBuilder(""); process = new Process(); process.StartInfo.FileName = "sqlplus"; process.StartInfo.UseShellExecute = false; process.StartInfo.CreateNoWindow = true; process.OutputDataReceived += new DataReceivedEventHandler(ShellOutputHandler); process.StartInfo.RedirectStandardInput = true; process.StartInfo.RedirectStandardOutput = true; //process.StartInfo.RedirectStandardError = true; process.Start(); myStreamWriter = process.StandardInput; process.BeginOutputReadLine(); } private static void ShellOutputHandler(object sendingProcess,DataReceivedEventArgs outLine) { if (!String.IsNullOrEmpty(outLine.Data)) shellOutput.Append(Environment.NewLine + outLine.Data); } public void closeConnection() { myStreamWriter.Close(); process.WaitForExit(); process.Close(); } public void RunCommand(string arguments) { myStreamWriter.WriteLine(arguments); myStreamWriter.Flush(); process.WaitForExit(100); Console.WriteLine(shellOutput); Console.WriteLine("============="+Environment.NewLine); process.WaitForExit(2000); Console.WriteLine(shellOutput); } } and my input is like this: myProcesStarter.RunCommand("myusername/mypassword"); Console.writeline(myProcesStarter.GetShellOutput); but take a look at my out put: SQL*Plus: Release 11.1.0.6.0 - Production on Thu May 20 11:57:38 2010 Copyright (c) 1982, 2007, Oracle. All rights reserved. ============= SQL*Plus: Release 11.1.0.6.0 - Production on Thu May 20 11:57:38 2010 Copyright (c) 1982, 2007, Oracle. All rights reserved. Enter user-name: Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options as u see the output for run a function is not same in different time!So now would you do me a faver and help me that how I can wait until all the output done in other mean how I can customize my process to wait until output finishing ?? because I want to write a sqlcompiler so I need the exact output of shell. plz help me soon.thanxxxxxxxxxxxx :X

    Read the article

  • Sometimes a big delay when using PHP mail()

    - by Robbert Dam
    I have a website that processes orders from a Windows application. It works as follows:

    1. The user clicks "Order now" in the Windows app
    2. The app uploads a file to a PHP script with a POST request
    3. The script immediately calls the PHP mail() function (the order is not stored in a database)

    This works fine most of the time. Sometimes, however, a big delay occurs (several days), and customers call to ask why their product has not yet been delivered. The e-mail headers of a delayed mail follow:

    Microsoft Mail Internet Headers Version 2.0
    Received: from barracuda.nkl.nl ([10.0.0.1]) by smtp.nkl.nl with Microsoft SMTPSVC(6.0.3790.3959); Wed, 26 May 2010 16:26:51 +0200
    X-ASG-Debug-ID: 1274883818-2f8800000000-X58hIK
    X-Barracuda-URL: http://10.0.0.1:8000/cgi-bin/mark.cgi
    Received: from server45.firstfind.nl (localhost [127.0.0.1]) by barracuda.nkl.nl (Spam & Virus Firewall) with ESMTP id ECAFD15776A for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
    Received: from server45.firstfind.nl (server45.firstfind.nl [93.94.226.76]) by barracuda.nkl.nl with ESMTP id 85bAT2AU58kkxjPb for <[email protected]>; Wed, 26 May 2010 16:23:38 +0200 (CEST)
    X-Barracuda-Envelope-From: [email protected]
    Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
    Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200
    Date: Wed, 19 May 2010 11:47:10 +0200
    Message-Id: <[email protected]>
    To: [email protected]
    X-ASG-Orig-Subj: easyfit - ref: Hoen3443
    Subject: easyfit - ref: Hoen3443
    X-PHP-Script: www.nklseminar.nl/emailer/upload.php for 77.61.220.217
    From: [email protected]
    Content-type: text/html
    X-Virus-Scanned: by amavisd-new
    X-Barracuda-Connect: server45.firstfind.nl[93.94.226.76]
    X-Barracuda-Start-Time: 1274883820
    X-Barracuda-Bayes: INNOCENT GLOBAL 0.0000 1.0000 -2.0210
    X-Barracuda-Virus-Scanned: by Barracuda Spam & Virus Firewall at nkl.nl
    X-Barracuda-Spam-Score: 0.92
    X-Barracuda-Spam-Status: No, SCORE=0.92 using global scores of TAG_LEVEL=2.0 QUARANTINE_LEVEL=1000.0 KILL_LEVEL=3.0 tests=DATE_IN_PAST_96_XX, DATE_IN_PAST_96_XX_2, HTML_MESSAGE, MIME_HEADER_CTYPE_ONLY, MIME_HTML_ONLY, NO_REAL_NAME
    X-Barracuda-Spam-Report: Code version 3.2, rules version 3.2.2.30817
      Rule breakdown below
       pts rule name              description
      ---- ---------------------- --------------------------------------------------
      0.00 NO_REAL_NAME           From: does not include a real name
      0.01 DATE_IN_PAST_96_XX     Date: is 96 hours or more before Received: date
      0.00 MIME_HTML_ONLY         BODY: Message only has text/html MIME parts
      0.00 HTML_MESSAGE           BODY: HTML included in message
      0.86 MIME_HEADER_CTYPE_ONLY 'Content-Type' found without required MIME headers
      2.07 DATE_IN_PAST_96_XX_2   DATE_IN_PAST_96_XX_2
    Return-Path: [email protected]
    X-OriginalArrivalTime: 26 May 2010 14:26:51.0343 (UTC) FILETIME=[7B80DDF0:01CAFCDF]

    The delay seems to occur here:

    Received: from server45.firstfind.nl (localhost [127.0.0.1]) by server45.firstfind.nl (8.13.8/8.13.8/Debian-3+etch1) with ESMTP id o4QEM3Hb004301 for <[email protected]>; Wed, 26 May 2010 16:23:31 +0200
    Received: (from nklsemin@localhost) by server45.firstfind.nl (8.13.8/8.13.8/Submit) id o4J9lA7M031307; Wed, 19 May 2010 11:47:10 +0200

    I've reported this issue several times to the web hosting service that hosts my website. They say the delay does not occur in their network (impossible), but they do confirm that the e-mail is first seen on their mail server on May 26, which is 7 days after the mail was composed. The order is marked with the timestamp of the user's local PC, which also matches May 19 (so it's not a PC clock problem). It's also interesting that all the delayed mails (for orders placed on different days) come in at once: I suddenly receive 14 e-mails in my mailbox from various days. Any idea where this delay may be introduced? Could there be a bug in my PHP code that causes this? I cannot believe I could introduce a 7-day loop in my PHP code.

    Read the article

  • SVN: Branches for Every Little Change?

    - by yar
    Hi. We have a client (who has a client, who has a client) who is driving us mad with change requests to a code base (in PHP). Our first response was to just work in a main trunk in SVN, but the client often comes back and requests that a certain change be pushed to the live servers ASAP. On the other hand, other changes suddenly drop in priority, even though they originally came grouped with those changes (seemingly). We are thinking of using a branch for every change request. Is this mad? What other solutions might work? Thanks!

    Edit: This is a really hard question to choose the correct answer for. Thanks to everybody for your great answers.

    Edit: I know that the best answer I chose was not particularly popular. I too wanted to find a technical solution to this problem. But now I think that if the client wants software with features that can be deployed in a modular fashion, this problem should not be solved in our use of the version control system; it would have to be designed into the software.

    Edit: Now it's almost a month later and my coworker/client has convinced me that multiple branches is the way to go. This is not just due to the client's insanity, but also based on our need to be able to determine whether a feature is "ready to go" or "needs more work" or whatever. I don't have the SVN with me, but we merge using the advice from the SVN Cookbook: you merge the branch from the revision it was branched at up to the head revision. Also, using this system, we merge all branches at some point and that becomes the new QA and then live build. Then we branch from that.

    Last Edit (Perhaps): Months later, this system is still working out for us. We create branches for every ticket and rarely have problems. On the other hand, we do try to keep things separate as far as what people are working on goes.

    Two Years Later: We use Git now, and this system is actually quite reasonable.

    Read the article
