Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 1092/1338

  • adding stock data to amibroker using c#

    - by femi
    Hello, I have had a hard time getting an answer to this and I would really, really appreciate some help. I have been on this for over two weeks without headway. I want to use C# to add a line of stock data to AmiBroker, but I just can't find a CLEAR response on how to instantiate it in C#. In VB, I would do it something like:

        Dim AmiBroker = CreateObject("Broker.Application")
        sSymbol = ArrayRow(0).ToUpper
        Stock = AmiBroker.Stocks.Add(sSymbol)
        iDate = ArrayRow(1).ToLower
        quote = Stock.Quotations.Add(iDate)
        quote.Open = CSng(ArrayRow(2))
        quote.High = CSng(ArrayRow(3))
        quote.Low = CSng(ArrayRow(4))
        quote.Close = CSng(ArrayRow(5))
        quote.Volume = CLng(ArrayRow(6))

    The problem is that CreateObject will not work in C# in this instance. I found the code below somewhere online, but I can't seem to understand how to achieve the above with it:

        Type objClassType;
        objClassType = Type.GetTypeFromProgID("Broker.Application");
        // Instantiate AmiBroker
        objApp = Activator.CreateInstance(objClassType);
        objStocks = objApp.GetType().InvokeMember("Stocks",
            BindingFlags.GetProperty, null, objApp, null);

    Can anyone help me here? Thanks
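    For what it's worth, here is a minimal sketch of one way to do this from C#. It assumes the same Broker.Application object model shown in the VB code above (Stocks.Add, Quotations.Add and the quote properties) and uses the dynamic keyword, so it needs C# 4.0 / .NET 4 or later; arrayRow is just a string[] standing in for the ArrayRow in the question:

        // Hedged sketch: member names are taken from the VB example above, not
        // verified against the AmiBroker OLE documentation.
        Type brokerType = Type.GetTypeFromProgID("Broker.Application");
        dynamic ami = Activator.CreateInstance(brokerType);

        dynamic stock = ami.Stocks.Add(arrayRow[0].ToUpper());
        dynamic quote = stock.Quotations.Add(arrayRow[1]);
        quote.Open   = float.Parse(arrayRow[2]);
        quote.High   = float.Parse(arrayRow[3]);
        quote.Low    = float.Parse(arrayRow[4]);
        quote.Close  = float.Parse(arrayRow[5]);
        quote.Volume = long.Parse(arrayRow[6]);

    On .NET 3.5 and earlier there is no dynamic dispatch, so every property get and method call has to go through Type.InvokeMember, which is why the reflection snippet in the question looks so different from the VB version.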

    Read the article

  • Jeditable with jQuery UI Datepicker

    - by BrynJ
    I need to have a click-to-edit element on a page that will in turn invoke an instance of the jQuery UI Datepicker. Currently I'm using Jeditable to provide the in-place editing, which is working fine. However, I have a date input that I would like to appear as a calendar, which is where the fun starts. I've found a comment in this blog by Calle Kabo (the page is a little mashed, unfortunately) that details a way to do this:

        $.editable.addInputType("datepicker", {
            element: function(settings, original) {
                var input = $("<input type=\"text\" name=\"value\" />");
                $(this).append(input);
                return(input);
            },
            plugin: function(settings, original) {
                var form = this;
                $("input", this).filter(":text").datepicker({
                    onSelect: function(dateText) {
                        $(this).hide();
                        $(form).trigger("submit");
                    }
                });
            }
        });

    However, I can't get the above to work - no errors, but no effect either. I've tried placing it within the jQuery document ready function and also outside of it - no joy. My UI Datepicker class is date-picker and my Jeditable class is ajaxedit; I'm sure the above inaction is due to the need to reference them somehow in the code, but I don't know how. Also, the Jeditable control is one of many element ids, if that has a bearing. Any ideas from those more in the know?
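    Registering the input type is only half of the wiring; the editable elements also have to be told to use it. A minimal sketch, assuming the addInputType call above has already run and that edits are posted to a hypothetical save.php (the selector matches the Jeditable class mentioned in the question):

        // Hedged sketch: "save.php" is a placeholder URL.
        $(function () {
            $(".ajaxedit").editable("save.php", {
                type:   "datepicker",  // the custom input type registered above
                onblur: "ignore",      // don't cancel the edit when focus moves to the calendar
                submit: "OK"
            });
        });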

    Read the article

  • Java design: too many getters

    - by dege
    After writing a few lesser programs when learning Java, the way I've designed the programs is with Model-View-Control. With MVC I have a plethora of getter methods in the model for the view to use. It feels that while I gain from using MVC, for every new value added I have to add two new methods in the model, which quickly gets cluttered with getters & setters.

    So I was thinking: maybe I should use the notifyObservers method that takes an argument. But it wouldn't feel very smart to send every value by itself either, so I figured, maybe I could send a kind of container with all the values, preferably only those that actually changed. What this would accomplish is that instead of having a whole lot of getter methods, I could have just one method in the model which puts all relevant values in the container. Then in the view I would have a method, called from update, which extracts the values from the container and assigns them to the correct fields.

    I have two questions concerning this. First: is this actually a viable way to do this? Would you recommend doing something along these lines? Secondly: if I do use this plan and I don't want to keep sending fields that didn't actually change, how would I handle that without having an if statement to check that the value is not null for every single value?
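    For what it's worth, a minimal sketch of the container idea, assuming the model extends java.util.Observable (the field names here are made up). The has() check also answers the second question: the view only reads the keys that are present, so there is no per-field null test.

        // Hedged sketch: an immutable "what changed" snapshot passed through notifyObservers.
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;

        public final class ModelUpdate {
            private final Map<String, Object> changed;

            public ModelUpdate(Map<String, Object> changed) {
                this.changed = Collections.unmodifiableMap(new HashMap<String, Object>(changed));
            }

            public boolean has(String key) { return changed.containsKey(key); }
            public Object  get(String key) { return changed.get(key); }
        }

        // In the model:
        //     Map<String, Object> delta = new HashMap<String, Object>();
        //     delta.put("score", score);                 // only the values that changed
        //     setChanged();
        //     notifyObservers(new ModelUpdate(delta));
        //
        // In the view's update(Observable o, Object arg):
        //     ModelUpdate u = (ModelUpdate) arg;
        //     if (u.has("score")) scoreLabel.setText(String.valueOf(u.get("score")));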

    Read the article

  • Are there any applications written in the Io programming language? (Or, distributing Io applications)

    - by Rayne
    I've recently become interested in prototype-based OOP, and I've been playing with Io and Ioke. Distributing an application with Ioke is simple: it's on the JVM. Need I say more? However, I'm absolutely stumped as to how one would distribute an Io application, especially on Windows. It's not like you can have end-users compile Io to run your application. I was actually shocked that Io has gone for 8 years without forming some sort of standard for things like distribution. Ruby has gems, Java has jars, and so on. The worst thing about it is, I can't find a single application written in Io to maybe steal distribution ideas from. Maybe I suck at Google searching (Io is a horrible search term, by the way ;P). Is there any sort of canonical way to distribute Io applications? Are there even any Io applications in existence, or am I just missing the point? I'm not sure if this should be community wiki or not. If you think it should, comment and let me know.

    Read the article

  • Managing common components with Fossil CVS

    - by Larry Lustig
    I'm a Fossil (and CVS configuration) novice attempting to create and manage a set of distributed Fossil repositories for a Delphi project. I have the following directory tree:

        Projects
            Some Project
            Delphi Components
                LookupListView
            Some Client
                Some Project For Client
                Some Other Project For Client
                    Source Code
                    Project Resources
                    Project Database

    I am setting up Fossil version control in order to version and share Projects\Some Client\Some Other Project For Client\Source Code, which contains Delphi 2010 source for a database project. This project makes use of Projects\Delphi Components\LookupListView, which is a Delphi component. I need this code to be included in the versioning system for my project. I will, in theory, need to include it in other Fossil repositories in the future as well. If I create my Fossil repository at the Source Code or Some Other Project For Client level, I cannot add any code above that level to my repository. What is the proper way to deal with this? The two solutions that occur to me are:

    1) Creating a separate repository for LookupListView and making sure that everyone who uses a repository for a project that references it "knows" that they must also get the current version of this project as well. This seems to defeat the purpose of being able to obtain a complete, current version of the project with a single checkout. The problem is magnified because there are other common component dependencies in this project.

    2) Establishing my Fossil repository in the Projects directory, so I can check in files from various subfolders. This seems to me to involve an awful lot of extra path-typing when doing adds, and also to impose my directory structure (Some Client\Some Other Project For Client\Source) on the other users of the repository -- in this case, the actual client.

    Any suggestions appreciated.

    Read the article

  • How to make Nlog archive a file with the date when the logging took place.

    - by Carl Bergquist
    Hi, we are using NLog as our logging framework and I cannot find a way to archive files the way I want. I would like to have the date when the logging took place in the log file name. For example, all logging that happened from 2009-10-01 00:00 -> 2009-10-01 23:59 should be placed in Log.2009-10-01.log. The current NLog.config that I use looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <extensions>
            <add assembly="My.Awesome.LoggingExentions"/>
          </extensions>
          <targets>
            <target name="file1"
                    xsi:type="File"
                    fileName="${basedir}/Logs/Log.log"
                    layout="${longdate} ${level:uppercase=true:padding=5} ${session} ${storeid} ${msisdn} - ${logger:shortName=true} - ${message} ${exception:format=tostring}"
                    archiveEvery="Day"
                    archiveFileName="${basedir}/Logs/Log${shortdate}-{#}.log"
                    archiveNumbering="Sequence"
                    maxArchiveFiles="99999"
                    keepFileOpen="true" />
          </targets>
          <rules>
            <logger name="*" minlevel="Trace" writeTo="file1" />
          </rules>
        </nlog>

    This, however, sets the date in the archive file name to the date when the new log file is created, which causes frustration when you want to read the logs later. It also seems like I have to have at least one # in the archiveFileName, which I would rather not. So if you have a solution for that as well, I would be twice as thankful =)
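    Not exactly the archive setup asked for, but one commonly suggested workaround is to put the date into the fileName itself, so every entry is written straight into the file named for the day it was logged and the archiving step (and the mandatory #) disappears. A hedged sketch, using only layout renderers that already appear in the config above (the layout is shortened here for readability):

        <!-- Hedged sketch: per-day file names instead of archiving. -->
        <target name="file1"
                xsi:type="File"
                fileName="${basedir}/Logs/Log.${shortdate}.log"
                layout="${longdate} ${level:uppercase=true:padding=5} ${message} ${exception:format=tostring}"
                keepFileOpen="true" />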

    Read the article

  • Sugar CRM Soap call not working properly

    - by Jasim
    I have a SugarCRM instance and I was trying to get some data from it using the SOAP service. Below is the code I am using. When I run the same code, sometimes it returns correct data and sometimes it does not. Can anyone tell me what the problem is?

        include "nusoap.php";

        $client = new soapclient('http://asdf.net/test/urbancrm_2009_06_22/soap.php');

        // Login to SugarCRM
        $auth_array = array(
            'user_auth' => array(
                'user_name' => '******',
                'password'  => '*******'
            ),
        );
        $response = $client->call('login', $auth_array);

        if (!$response['error']['number']) {
            // Successfully logged in
            $session_id = $response['id'];

            //$response = $client->call('get_entry_list', array('session'=>$session_id, 'module_name'=>'Users', 'query'=>'', 'order_by'=>'', 'offset'=>'', 'select_fields'=>array('id','user_name')));
            $response = $client->call('get_entry_list', array(
                'session'       => $session_id,
                'module_name'   => 'itf_Apartments',
                'query'         => "itf_apartments_cstm.neighborhood_c='Loop'",
                'order_by'      => '',
                'offset'        => '',
                'select_fields' => array('name', 'studio', 'convertible', 'one_bedroom',
                                         'one_bedroom_plus_den', 'two_bedroom',
                                         'two_bedroom_plus_den', 'penthouse', 'photo_c',
                                         'building_type_c', 'neighborhood_c')
            ));
            //$response = $client->call('get_entry_list', array('session'=>$session_id, 'module_name'=>'itf_Apartments', 'query'=>'itf_apartments_cstm.urbanlux_id_c="1"', 'order_by'=>'', 'offset'=>'', 'select_fields'=>array('name','studio','convertible','one_bedroom','one_bedroom_plus_den','two_bedroom','two_bedroom_plus_den','penthouse',)));

            //store id and user name as a key value pair in array
            //echo "---";
            print_r($response);
        } else {
            echo "else";
            print_r($response);
        }
        ?>

    Read the article

  • How do I debug a HTTP 502 error?

    - by Bialecki
    I have a Python Tornado server sitting behind an nginx frontend. Every now and then, but not every time, I get a 502 error. I look in the nginx access log and I see this:

        127.0.0.1 - - [02/Jun/2010:18:04:02 -0400] "POST /a/question/updates HTTP/1.1" 502 173 "http://localhost/tagged/python" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3"

    and in the error log:

        2010/06/02 18:04:02 [error] 14033#0: *1700 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: _, request: "POST /a/question/updates HTTP/1.1", upstream: "http://127.0.0.1:8888/a/question/updates", host: "localhost", referrer: "http://localhost/tagged/python"

    I don't think any errors show up in the Tornado log. How would you go about debugging this? Is there something I can put in the Tornado or nginx configuration to help debug this?

    EDIT: In addition, I get a fair number of 504 gateway timeout errors. Is it possible that the Tornado instance is just busy or something?
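    While digging into it, the knobs that matter live in nginx's proxy configuration. A hedged sketch of where they go (the directive names are standard nginx; the backend addresses are just the one from the error log plus a hypothetical second Tornado instance, which only helps if you actually run one):

        # Hedged sketch: retry another backend on connect errors/timeouts, and give
        # slow requests more time before nginx returns 504.
        upstream tornado {
            server 127.0.0.1:8888;
            server 127.0.0.1:8889;   # hypothetical second Tornado process
        }

        server {
            location / {
                proxy_pass            http://tornado;
                proxy_connect_timeout 5s;
                proxy_read_timeout    60s;
                proxy_next_upstream   error timeout;
            }
        }

    The "connect() failed (111: Connection refused)" line itself means nginx could not open a connection to 127.0.0.1:8888 at all, so the first thing to check is whether the Tornado process was up and listening at that moment.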

    Read the article

  • Mirth is Not Picking Up Updated JAR File?

    - by ashes999
    I have some Mirth code (JavaScript) that's consuming one of my Java classes (let's call it SimpleClass). It seems that Mirth is not picking up my changes to the class. SimpleClass used to look something like this:

        public class SimpleClass {
            public SimpleClass(String one) {
                // do something with one
            }
        }

    Now it looks like:

        public class SimpleClass {
            public SimpleClass(String one, String two) {
                // store one and two
            }

            public void Execute() {
                // do something with one and two
            }
        }

    When I try to call SimpleClass with two strings, I get the error "Java constructor for SimpleClass with arguments string, string not found." When I try to run Execute, I get an error similar to "TypeError: method Execute not found." Perplexingly, I can still call SimpleClass("one string") to call the one-string constructor (which is now nowhere to be seen in the code). To get my code to work, I build a JAR file using ant and deploy it to \\lib\custom. I have tried:

    - Copying the new JAR and restarting Mirth while it's live
    - Copying the new JAR after stopping Mirth, then starting Mirth
    - Looking for other copies of my JAR or Mirth installations (there are none)
    - Unzipping and reverse-engineering the JAR (it has the two-constructor code)
    - Hashing the JAR (it matches what I'm building in ant)
    - Restarting my computer (hey, you never know)

    I'm at a loss. I'm not sure why I'm not seeing the latest code. My channel has very simple JavaScript:

        var sp = new Packages.com.hs.channel.scarborough.ScarboroughParser("one", "two");

    I know it's not a configuration error of any kind, since this used to work (and still works, albeit with the old code -- single-parameter constructor).

    Read the article

  • Far jump in ntdll.dll's internal ZwCreateUserProcess

    - by user49164
    I'm trying to understand how the Windows API creates processes so I can create a program to determine where invalid exes fail. I have a program that calls kernel32.CreateProcessA. Following along in OllyDbg, this calls kernel32.CreateProcessInternalA, which calls kernel32.CreateProcessInternalW, which calls ntdll.ZwCreateUserProcess. This function goes:

        mov  eax, 0xAA
        xor  ecx, ecx
        lea  edx, dword ptr [esp+4]
        call dword ptr fs:[0xC0]
        add  esp, 4
        retn 0x2C

    So I follow the call to fs:[0xC0], which contains a single instruction:

        jmp far 0x33:0x74BE271E

    But when I step this instruction, Olly just comes back to ntdll.ZwCreateUserProcess at the add esp, 4 right after the call (which is not at 0x74BE271E). I put a breakpoint at retn 0x2C, and I find that the new process was somehow created during the execution of add esp, 4. So I'm assuming there's some magic involved in the far jump. I tried to change the CS register to 0x33 and EIP to 0x74BE271E instead of actually executing the far jump, but that just gave me an access violation after a few instructions. What's going on here? I need to be able to delve deeper beyond the abstraction of this ZwCreateUserProcess to figure out how exactly Windows creates processes.

    Read the article

  • Project setup for an ADO.NET/WCF DataService

    - by Slauma
    I'd like to implement an ADO.NET/WCF DataService and I am wondering what's the best way to set up a project in VS2008 SP1 for this purpose.

    Currently I have an ASP.NET web application project (not of "WebSite" project type). The data access layer is an Entity model (EF version 1) with a SQL Server database. I have the Entity model in a separate DLL project and the web application project references this assembly for all data access.

    The ADO.NET/WCF DataService needs to communicate with the Entity model/database as well. It has to be hosted on the same web server (IIS 7.5) together with the web application. Since the DataService is not directly related to that specific web application (though it will provide and modify data from/in the same database the web application uses as well), my basic idea was to separate the DataService into its own new project (which also references the Entity model DLL).

    Now I have seen that there is no project type "ADO.NET/WCF DataService" in VS2008 SP1. It seems only possible to add a DataService as an element to other existing projects, for instance Web Application projects. Why isn't there a separate DataService project type? Does this mean that I have to add the DataService as an element to my Web Application project? Or shall I create a new Web Application project and add a DataService to it? (I could delete the pregenerated default.aspx since I do not need any web pages in this project.) What's the best way? Thank you for suggestions in advance!

    Read the article

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this:

        "FieldName1", "FieldName2", "FieldName3", "FieldName4"
        "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373"
        "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543"
        ...

    Note that there are double-quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via:

        import numpy as np
        data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True)

    all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me as I then have to go back and convert every column to its correct type.

    When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start-of-line and end-of-line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case.

    Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
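    A hedged sketch of one workaround: genfromtxt will accept any iterable of lines, so the quotes can be stripped before numpy ever sees them (csvfile below is the same path variable used in the question; depending on the numpy/Python version you may need to yield bytes, i.e. line.encode(), instead of str):

        # Hedged sketch: strip the quoting, then let dtype=None autodetect column types.
        import numpy as np

        def unquoted_lines(path):
            with open(path) as f:
                for line in f:
                    yield line.replace('"', '')

        data = np.genfromtxt(unquoted_lines(csvfile), dtype=None, delimiter=',',
                             names=True, autostrip=True)

    This keeps the single read and avoids post-processing the resulting array, at the cost of assuming no field legitimately contains a double-quote character.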

    Read the article

  • DDD and Client/Server apps

    - by Christophe Herreman
    I was wondering if any of you have successfully implemented DDD in a client/server app and would like to share some experiences.

    We are currently working on a smart client in Flex and a backend in Java. On the server we have a service layer exposed to the client that offers CRUD operations amongst some other service methods. I understand that in DDD these services should be repositories, and services should be used to handle use cases that do not fit inside a repository. Right now, we mimic these services on the client behind an interface and inject implementations (web services, RMI, etc.) via an IoC container. So some questions arise:

    - Should the server expose repositories to the client, or do we need to have some sort of a facade (that is able to handle security, for instance)?

    - Should the client implement repositories (and DDD in general?), knowing that in the client most of the logic is view-related and the real business logic lives on the server? All communication with the server happens asynchronously and we have a single-threaded programming model on the client.

    - How about mapping client to server objects and vice versa? We tried DTOs but reverted back to exposing the state of our objects and mapping directly to them. (I know this is considered bad practice, but it saves us an incredible amount of time.)

    In general I think a new generation of applications is coming with the growth of Flex, Silverlight and JavaFX, and I'm curious how DDD fits into this.

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data?

    For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table:        Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension:    City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension:   City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
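    A minimal SQL sketch of the star schema described above may make the comparison concrete; the table and column names are just illustrative, and the denormalized region columns in dim_city correspond to the "dimensional" variant at the end of the question:

        -- Hedged sketch: one fact table keyed by the two dimensions.
        CREATE TABLE dim_product (
            product_id   INTEGER PRIMARY KEY,
            product_name VARCHAR(100)
        );

        CREATE TABLE dim_city (
            city_id         INTEGER PRIMARY KEY,
            city_name       VARCHAR(100),
            region_name     VARCHAR(100),   -- denormalized region attributes live here
            region_manager  VARCHAR(100),
            regional_stores INTEGER
        );

        CREATE TABLE fact_sales (
            product_id INTEGER REFERENCES dim_product (product_id),
            city_id    INTEGER REFERENCES dim_city (city_id),
            sales      INTEGER
        );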

    Read the article

  • How can I render a list of objects using DisplayFor but from the controller in ASP.NET MVC?

    - by Darragh
    Here's the scenario: I have an Employee object and a Company object which has a list of employees. I have Company.aspx, which inherits from ViewPage<Company>. In Company.aspx I call Html.DisplayFor(m => m.Employees). I have an Employee.ascx partial view which inherits from ViewUserControl<Employee> in my DisplayTemplates folder. Everything works fine and Company.aspx renders the Employee.ascx partial for each employee.

    Now I have two additional methods on my controller called GetEmployees and GetEmployee(Id). In the GetEmployee(Id) action I want to return the markup to display this one employee, and in GetEmployees() I want to return the markup to display all the employees (these two action methods will be called via AJAX). In the GetEmployee action I call:

        return PartialView("DisplayTemplates\Employee", employee)

    This works, although I'd prefer something like return PartialViewFor(employee), which would determine the view name by convention. Anyway, my question is: how should I implement the GetEmployees() action? I don't want to create any more views, because frankly, I don't see why I should have to. I've tried the following, which fails miserably :)

        return Content(New HtmlHelper<IList<Of DebtDto>>(null, null).DisplayFor(m => debts));

    However, if I could create an instance of an HtmlHelper object in my controller, I suppose I could get it to work, but it feels wrong. Any ideas? Have I missed something obvious?

    Read the article

  • Embedded SQL in OO languages like Java

    - by Steve De Caux
    One of the things that annoys me when working with SQL in OO languages is having to define SQL statements in strings. When I used to work on IBM mainframes, the languages used an SQL preprocessor to parse SQL statements out of the native code, so the statements could be written in cleartext SQL without the obfuscation of strings. For instance, in COBOL there is an EXEC SQL .... END-EXEC syntax construct that allows pure SQL statements to be embedded in the COBOL code:

        <pure COBOL code, including assignment of a value to local variable HOSTVARIABLE>
        EXEC SQL
            SELECT COL_A, COL_B, COL_C
            INTO   :COLA, :COLB, :COLC
            FROM   TAB_A
            WHERE  COL_D = :HOSTVARIABLE
        END-EXEC
        <more COBOL code; variables COLA, COLB, COLC have been set>

    ...this makes the SQL statement really easy to read and check for errors. Between the EXEC SQL .... END-EXEC tokens there are no constraints on indentation, line breaking etc., so you can format the SQL statement according to taste. Note that this example is for a single-row select; when a multiple-row resultset is expected, the coding is different (but still very easy to read).

    So, taking Java as an example: what made the "old COBOL" approach undesirable? Not only SQL, but system calls could be made much more readable with that approach. Let's call it the embedded foreign language preprocessor approach. Would an embedded foreign language preprocessor for SQL be useful to implement? Would you see a benefit in being able to write native SQL statements inside Java code?

    Edit: I'm really asking if you think SQL in OO languages is a throwback, and if not, what could be done to make it better.

    Read the article

  • Getting text between quotes using regular expression

    - by Camsoft
    I'm having some issues with a regular expression I'm creating. I need a regex to match against the following examples and then sub-match on the first quoted string.

    Input strings:

        ("Lorem ipsum dolor sit amet, consectetur adipiscing elit.")
        ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. ')
        ('Lorem ipsum dolor sit amet, consectetur adipiscing elit. ', 'arg1', "arg2")

    Must sub-match:

        Lorem ipsum dolor sit amet, consectetur adipiscing elit.

    Regex so far:

        \((["'])([^"']+)\1,?.*\)

    The regex does a sub-match on the text between the first set of quotes and returns the sub-match displayed above. This is almost working perfectly, but the problem I have is that if the quoted string contains quotes in the text, the sub-match stops at the first instance. See below.

    Failing input strings:

        ("Lorem ipsum dolor \"sit\" amet, consectetur adipiscing elit.")

    Only sub-matches: Lorem ipsum dolor

        ("Lorem ipsum dolor 'sit' amet, consectetur adipiscing elit.")

    The entire match fails.

    Notes: The input strings are actually PHP code function calls. I'm writing a script that will scan .php source files for a specific function and grab the text from the first parameter.
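    For what it's worth, a hedged sketch of one way to handle both failing cases: inside the quotes, match either a backslash-escaped character or any character that is not the opening quote. The PHP wrapper below is just illustrative; the key part is the pattern, and stripcslashes() then un-escapes the captured text:

        <?php
        // Hedged sketch: \\. consumes an escaped character (e.g. \" or \'), while
        // (?!\1). takes any character that is not the opening quote captured in \1.
        $pattern = '/\(\s*(["\'])((?:\\\\.|(?!\1).)*)\1/';

        $code = '("Lorem ipsum dolor \"sit\" amet, consectetur adipiscing elit.")';
        if (preg_match($pattern, $code, $m)) {
            echo stripcslashes($m[2]);   // Lorem ipsum dolor "sit" amet, ...
        }

    Unlike the original pattern, this only captures the first quoted argument and ignores the rest of the argument list, which seems to be what's wanted here.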

    Read the article

  • Safari Extension Questions

    - by Rob Wilkerson
    I'm in the process of building my first Safari extension -- a very simple one -- but I've run into a couple of problems. The extension boils down to a single, injected script that attempts to bypass the native feed handler and redirect to an http:// URI. My issues so far are twofold:

    1. The "whitelist" isn't working the way I'd expect. Since all feeds are shown under the "feed://" protocol, I've tried to capture that in the whitelist as "feed://*/*" (with nothing in the blacklist), but I end up in a request loop that I can't understand. If I set blacklist values of "http://*/*" and "https://*/*", everything works as expected.

    2. I can't figure out how to access my settings from my injected script. The script creates a beforeload event handler, but can't access my settings using the safari.extension.settings path indicated in the documentation. I haven't found anything in Apple's documentation to indicate that settings shouldn't be available from my script.

    Since extensions are such a new feature, even Google returns limited relevant results and most of those are from the official documentation. What am I missing? Thanks.

    Read the article

  • Synchronization of Nested Data Structures between Threads in Java

    - by Dominik
    I have a cache implementation like this:

        class X {
            private final Map<String, ConcurrentMap<String, String>> structure = new HashMap...();

            public String getValue(String context, String id) {
                // just assume for this example that there will always be an inner map
                final ConcurrentMap<String, String> innerStructure = structure.get(context);
                String value = innerStructure.get(id);
                if (value == null) {
                    synchronized (structure) {
                        // can I be sure that this inner map will represent the last
                        // updated state from any thread?
                        value = innerStructure.get(id);
                        if (value == null) {
                            value = getValueFromSomeSlowSource(id);
                            innerStructure.put(id, value);
                        }
                    }
                }
                return value;
            }
        }

    Is this implementation thread-safe? Can I be sure to get the last updated state from any thread inside the synchronized block? Would this behaviour change if I used a java.util.concurrent.ReentrantLock instead of a synchronized block, like this?

        ...
        if (lock.tryLock(3, SECONDS)) {
            try {
                value = innerStructure.get(id);
                if (value == null) {
                    value = getValueFromSomeSlowSource(id);
                    innerStructure.put(id, value);
                }
            } finally {
                lock.unlock();
            }
        }
        ...

    I know that final instance members are synchronized between threads, but is this also true for the objects held by these members? Maybe this is a dumb question, but I don't know how to test it to be sure that it works on every OS and every architecture.
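    Not an answer to the visibility question as such, but since the inner maps are already ConcurrentMaps, here is a hedged sketch of how the check-then-act can be done without any external lock at all; the worst case is that getValueFromSomeSlowSource runs twice for the same id and one of the results is thrown away:

        // Hedged sketch: relies only on ConcurrentMap.putIfAbsent, which is atomic.
        String value = innerStructure.get(id);
        if (value == null) {
            String loaded = getValueFromSomeSlowSource(id);
            String previous = innerStructure.putIfAbsent(id, loaded);
            value = (previous != null) ? previous : loaded;   // another thread may have won the race
        }
        return value;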

    Read the article

  • HttpModule.Init - safely add HttpApplication.BeginRequest handler in IIS7 integrated mode

    - by Paul Smith
    My question is similar but not identical to: http://stackoverflow.com/questions/1123741/why-cant-my-host-softsyshosting-com-support-beginrequest-and-endrequest-event (I've also read the mvolo blog referenced therein).

    The goal is to successfully hook HttpApplication.BeginRequest in the IHttpModule.Init event (or anywhere internal to the module), using a normal HttpModule integrated via the system.webServer config, i.e. one that doesn't invade Global.asax or override the HttpApplication (the module is intended to be self-contained and reusable, so e.g. I have a config like this):

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules>
            <remove name="TheHttpModule" />
            <add name="TheHttpModule" type="Company.HttpModules.TheHttpModule" preCondition="managedHandler" />
          </modules>
        </system.webServer>

    So far, any strategy I've tried to attach a listener to HttpApplication.BeginRequest results in one of two things. Symptom 1 is that BeginRequest never fires; symptom 2 is that the following exception gets thrown on all managed requests, and I cannot catch and handle it from user code:

        [NullReferenceException: Object reference not set to an instance of an object.]
           System.Web.PipelineModuleStepContainer.GetEventCount(RequestNotification notification, Boolean isPostEvent) +30
           System.Web.PipelineStepManager.ResumeSteps(Exception error) +1112
           System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) +113
           System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +616

    Commenting out app.BeginRequest += new EventHandler(this.OnBeginRequest) in Init stops the exception, of course. Init does not reference the Context or Request objects at all. I have tried:

    - Removing all references to HttpContext.Current anywhere in the project (still symptom 1)
    - Testing with all code removed from the body of my OnBeginRequest method, to ensure the problem wasn't internal to the method (= exception)
    - Sniffing the stack trace and only calling app.BeginRequest += ... when the stack isn't started by InitializeApplication (= BeginRequest not firing)
    - Only calling app.BeginRequest += on the second pass through Init (= BeginRequest not firing)

    Anyone know of a good approach? Is there some indirect strategy for hooking Application_Start within the module (seems unlikely)? Another event which a) one can hook from a module's constructor or Init method, and b) which is subsequently a safe place to attach BeginRequest event handlers? Thanks much
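    For reference, a minimal sketch of the shape being aimed at, i.e. a completely self-contained module that hooks BeginRequest in Init with nothing added to Global.asax (the class and namespace names match the config above; the handler body is just a placeholder):

        // Hedged sketch: the standard IHttpModule pattern; Init touches nothing but
        // the HttpApplication instance it is handed.
        using System;
        using System.Web;

        namespace Company.HttpModules
        {
            public class TheHttpModule : IHttpModule
            {
                public void Init(HttpApplication app)
                {
                    app.BeginRequest += OnBeginRequest;
                }

                private void OnBeginRequest(object sender, EventArgs e)
                {
                    var app = (HttpApplication)sender;
                    // app.Context is available here, once a request is actually executing
                }

                public void Dispose() { }
            }
        }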

    Read the article

  • How to use Svn Version Task to set the Version of a vb project

    - by SchlaWiener
    I have a Visual Studio 2008 solution where the main output is a VB.NET WinForms exe, which links several VB.NET and C# DLLs from the same solution. The whole solution is under version control with Subversion. Now I want to automagically update my generated files with the current svn revision number. For this purpose I found this neat project: http://svnversiontasks.codeplex.com/ (you also need the MSBuild.Community.Tasks for this to work). There was an MSBuild example on how to update the revision number for every single project in your solution, which I use:

        <Import Project="$(MSBuildExtensionsPath)\SvnTools.Targets\SvnTools.Tasks.VersionManagement.Tasks" />
        <Target Name="build">
          <CreateItem Include="../**/AssemblyInfo.vb;../**/AssemblyInfo.cs;../**/Properties/AssemblyInfo.cs">
            <Output TaskParameter="Include" ItemName="AssemblyInfoFiles" />
          </CreateItem>
          <CreateItem Include="../**/*.vdproj;*.vdproj">
            <Output TaskParameter="Include" ItemName="DeploymentProjectFiles" />
          </CreateItem>
          <UpdateVersion AssemblyInfoFiles="@(AssemblyInfoFiles)" DeploymentProjectFiles="@(DeploymentProjectFiles)" Format="yyyy.mm.dd.rev" />
          <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\devenv&quot; ..\MyApp.sln /build" />
          <RevertVersionChange AssemblyInfoFiles="@(AssemblyInfoFiles)" DeploymentProjectFiles="@(DeploymentProjectFiles)" />
        </Target>

    I modified the original file to also include the AssemblyInfo.vb files and saved it as msbuild.proj. However, if I execute msbuild from the console, I see that the C# projects are updated (I can also confirm that from the properties of the output DLLs), but my VB project remains unchanged:

        Reverting version number change: ../App1\AssemblyInfo.vb
        Updating version number (to rev 0) for file: ../App1\AssemblyInfo.vb
        D:\Source\MyApp\MyAppDeploy\MyAppDeploy.csproj : warning : Version attribute not found, file not updated.
        Reverting version number change: ../App2\Properties\AssemblyInfo.cs
        Updating version number (to rev 0) for file: ../App2\Properties\AssemblyInfo.cs
        Successfully updated file.

    Maybe the task does not support VB.NET. But maybe someone has a solution for this...

    Read the article

  • ActiveMQ broker configuration error when specifying persistenceAdapter: "One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected"

    - by Joe
    I am setting up a simple ActiveMQ embedded broker. It works fine until I try to configure a persistence adapter. I am basically just copying the configuration from http://activemq.apache.org/persistence.html#Persistence-ConfiguringKahaPersistence. When I add this configuration to my Spring configuration, like so:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                   http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.0.xsd">

          <amq:broker useJmx="true" persistent="true" brokerName="localhost">
            <amq:transportConnectors>
              <amq:transportConnector name="vm" uri="vm://localhost"/>
            </amq:transportConnectors>
            <amq:persistenceAdapter>
              <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
            </amq:persistenceAdapter>
          </amq:broker>
        </beans>

    I get the error:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'amq:persistenceAdapter'. One of '{WC[##other:"http://activemq.apache.org/schema/core"]}' is expected.

    When I take out the amq:persistenceAdapter element, it works fine. The same error happens no matter which persistence adapter I include in the body, e.g. jdbc, journal, etc. Any help would be greatly appreciated. Thanks.
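    If memory serves, the generated ActiveMQ XSD requires the child elements of <amq:broker> to appear in a fixed (essentially alphabetical) order, which is why the validator complains as soon as <amq:persistenceAdapter> shows up after <amq:transportConnectors>. A hedged sketch of the broker element with the two blocks swapped, everything else unchanged:

        <!-- Hedged sketch: same content as above, persistenceAdapter moved before transportConnectors. -->
        <amq:broker useJmx="true" persistent="true" brokerName="localhost">
          <amq:persistenceAdapter>
            <amq:kahaPersistenceAdapter directory="activemq-data" maxDataFileLength="33554432"/>
          </amq:persistenceAdapter>
          <amq:transportConnectors>
            <amq:transportConnector name="vm" uri="vm://localhost"/>
          </amq:transportConnectors>
        </amq:broker>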

    Read the article

  • Accessing frame info in gdb

    - by Maelstrom
    In gdb, is there a way to access the contents of info frame in a script? I'm debugging a problem somewhere between Apache, PHP, APC and my own code, and I have about a hundred cores to choose from. Following the instructions here http://bugs.php.net/bugs-generating-backtrace.php I end up with a stack trace like:

        #0  0x0121a31a in do_bind_function (opline=0xa94dd750, function_table=0x9b9cf98, compile_time=0 '\0') at /usr/src/debug/php-5.2.7/Zend/zend_compile.c:2407
        #1  0x0124bb2e in ZEND_DECLARE_FUNCTION_SPEC_HANDLER (execute_data=0xbfef7990) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:498
        #2  0x01249dfa in execute (op_array=0xb79d5d3c) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
        #3  0x01261e31 in ZEND_INCLUDE_OR_EVAL_SPEC_VAR_HANDLER (execute_data=0xbfef80d0) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:7809
        #4  0x01249dfa in execute (op_array=0xb79d55ec) at /usr/src/debug/php-5.2.7/Zend/zend_vm_execute.h:92
        ...
        #26 0x09caa894 in ?? ()
        #27 0x00000000 in ?? ()

    The stack will always look similar, with the function execute and ZEND_something interleaved several times. I need to go up to the last instance of execute (up 2 in this case) and print myVar. Obviously gdb knows the function names, but does it surface them in any user variables I could access? Typing frame 2 shows a one-line version, and info frame shows a single stack frame in detail. I want to do something like:

        while ($current_frame.function_name != "execute") { up; }
        print myVar

    but I don't see how to do it strictly within gdb. Is there a variable / structure / special memory location / something that allows access to gdb's information on either the whole stack (like bt) or the current stack frame (like info frame)?
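    A hedged sketch of one way to do exactly this, assuming a gdb new enough to have the embedded Python scripting (7.0 and later); the frame methods used below (selected_frame, older, name, select) are part of the gdb.Frame API:

        # Hedged sketch: walk outward from the current frame until a frame whose
        # function is named "execute" is found, select it, and print myVar there.
        # Save as find_execute.py and run with:  (gdb) source find_execute.py
        import gdb

        frame = gdb.selected_frame()
        while frame is not None and frame.name() != "execute":
            frame = frame.older()

        if frame is None:
            gdb.write("no 'execute' frame found\n")
        else:
            frame.select()
            gdb.execute("print myVar")

    For older gdb builds without Python, there is no clean equivalent; a canned sequence of up commands inside a user-defined command is about as close as it gets.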

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer.

    I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE.

    What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help.

    Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses.

    The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.
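    Following the row-names hunch in the update, a hedged sketch of how to confirm it and neutralise it with base R only (this_tx and last_tx as in the question):

        # Hedged sketch: row names survive subsetting, so two otherwise identical
        # subsets of the same data frame can differ only in their attributes.
        dput(attributes(this_tx))     # compare row.names (and any other attributes)
        dput(attributes(last_tx))

        rownames(this_tx) <- NULL     # reset both to the default 1..n row names
        rownames(last_tx) <- NULL
        identical(this_tx, last_tx)

        all.equal(this_tx, last_tx)   # unlike identical(), reports *what* differs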

    Read the article

  • Extract <name> attribute from KML

    - by Ozaki
    I am using OpenLayers for a mapping service, in which I have several KML layers using KML feeds from the server to populate data on the map. It currently plots images / points / vector lines & shapes, but on these points it will not add a label with the value of the <name> element for the placemark in the KML. What I have currently tried is:

        //////////////////// KML feed for a layer ////////////////////
        var surveylinelayer = new OpenLayers.Layer.Vector("First KML Layer", {
            projection: new OpenLayers.Projection("EPSG:4326"),
            strategies: [new OpenLayers.Strategy.Fixed()],
            protocol: new OpenLayers.Protocol.HTTP({
                url: firstKMLURL,
                format: new OpenLayers.Format.KML({
                    extractStyles: true,
                    extractAttributes: true
                })
            }),
            styleMap: new OpenLayers.StyleMap({ "default": KMLStyle })
        });

    then the style as follows:

        var KMLStyle = new OpenLayers.Style({
            //label: "${name}", // This method will display nothing
            fillOpacity: 1,
            pointRadius: 10,
            fontColor: "#7E3C1C",
            fontSize: "13px",
            fontFamily: "Courier New, monospace",
            fontWeight: "strong",
            labelXOffset: "0",
            labelYOffset: "-15"
        }, {
            // dynamic label
            context: {
                label: function(feature) {
                    return "Feature Name: " + feature.attributes.name; // also displays nothing
                }
            }
        });

    Example of the KML:

        <Placemark>
            <name>POI1</name>
            <Style>
                <LabelStyle>
                    <color>ffffffff</color>
                </LabelStyle>
            </Style>
            <Point>
                <coordinates>0.000,0.000</coordinates>
            </Point>
        </Placemark>

    When debugging I just hit "feature is undefined" and am unsure why it would be undefined in this instance?
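    One thing worth noting about the style above: the label property is commented out, and a context function is only evaluated when some symbolizer property actually references it via "${...}". A hedged sketch of the two wired together (everything else exactly as in the question; whether feature.attributes.name holds the raw <name> string depends on the KML parser, hence the fallback):

        // Hedged sketch: "${label}" is resolved through the context function below,
        // which reads the name that extractAttributes placed on feature.attributes.
        var KMLStyle = new OpenLayers.Style({
            label: "${label}",
            fillOpacity: 1,
            pointRadius: 10,
            fontColor: "#7E3C1C",
            fontSize: "13px",
            fontFamily: "Courier New, monospace",
            fontWeight: "strong",
            labelXOffset: "0",
            labelYOffset: "-15"
        }, {
            context: {
                label: function(feature) {
                    return (feature && feature.attributes && feature.attributes.name) || "";
                }
            }
        });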

    Read the article
