Search Results

Search found 9492 results on 380 pages for 'logic unit'.

Page 315/380

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The types of expressions should be along the lines of: varA == "xyz" and varB==123 Basically, just a simple logic engine whose expression is provided at runtime. UPDATE At minimum it needs to support ==, !=, >, >=, <, <= Regarding speed, I expect roughly 5 expressions to be executed per request. We'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50ms. Usually there will only be 2 or 3 variables involved in any expression. However, I'll need to load approximately 30 into the parser prior to execution. UPDATE 2012/11/5 Update about performance. We implemented nCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on post backs. There are now thousands of post backs occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, nCalc met all of our needs and exceeded our expectations.
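
    For reference, evaluating a boolean expression like the one above with NCalc looks roughly like the sketch below. This is an illustration rather than code from the post; it assumes the NCalc package is referenced and reuses the variable names from the question.

        using System;
        using NCalc;

        class ExpressionDemo
        {
            static void Main()
            {
                // Expression text is supplied at runtime; NCalc accepts "and" as well as "&&"
                var expression = new Expression("varA == 'xyz' and varB == 123");

                // Load the name/value pairs before evaluating
                expression.Parameters["varA"] = "xyz";
                expression.Parameters["varB"] = 123;

                // Evaluate() returns object; for a boolean expression it can be cast to bool
                bool result = (bool)expression.Evaluate();
                Console.WriteLine(result); // True
            }
        }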

    Read the article

  • Calling services from the Orchestrating layer in SOA?

    - by Martin Lee
    The Service Oriented Architecture Principles site says that Service Composition is an important thing in SOA. But Service Loose Coupling is important as well. Does that mean that the "Orchestrating layer" should be the only one that is allowed to make calls to services in the system? As I understand SOA, the "Orchestrating layer" 'glues' all the services together into one software application. I tried to depict that on Fig.A and Fig.B. The difference between the two is that on Fig.A the application is composed of services and all the logic is done in the "Orchestrating layer" (all calls to services are done from the "Orchestrating layer" only). On Fig.B the application is composed of services, but one service calls another service. Does the architecture on Fig.B violate the "Service Loose Coupling" principle of SOA? Can a service call another service in SOA? And more generally, can the architecture on Fig.A be considered superior to the one on Fig.B in terms of service loose coupling, abstraction, reusability, autonomy, etc.? My guess is that the A architecture is much more universal, but it can add some unnecessary data transfers between the "Orchestrating layer" and all the called services.

    Read the article

  • Exporting WPF DataGrid to a text file in a nice looking format. Text file must be easy to read.

    - by Andrew
    Hi, Is there a simple way to export a WPF DataGrid to a text file and a .csv file? I've done some searching and can't see that there is a simple DataGrid method to do so. I have got something working (barely) by making use of the ItemsSource of the DataGrid. In my case, this is an array of structs. I do something with StreamWriters and StringBuilders (details available) and basically have: StringBuilder destination = new StringBuilder(); destination.Append(row.CreationDate); destination.Append(seperator); // seperator is '\t' or ',' for csv file destination.Append(row.ProcId); destination.Append(seperator); destination.Append(row.PartNumber); destination.Append(seperator); I do this for each struct in the array (in a loop). This works fine. The problem is that it's not easy to read the text file. The data can be of different lengths within the same column. I'd like to see something like: 2007-03-03 234238423823 823829923 2007-03-03 66 99 And of course, I get something like: 2007-03-03 234238423823 823829923 2007-03-03 66 99 It's not surprising given that I'm using Tab delimiters, but I hope to do better. It certainly is easy to read in the DataGrid! I could of course have some logic to pad short values with spaces, but that seems quite messy. I thought there might be a better way. I also have a feeling that this is a common problem and one that has probably been solved before (hopefully within the .NET classes themselves). Thanks.
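
    One low-tech way to get aligned plain-text output is to pad each column to a fixed width with composite format strings instead of tab separators. The sketch below uses the property names from the question but made-up column widths, so treat it as an illustration rather than a drop-in replacement.

        using System;
        using System.Text;

        class FixedWidthExport
        {
            // Pads each value to a fixed column width so rows line up in a plain text file.
            // Negative alignment means left-justified; the widths here are arbitrary examples.
            static string FormatRow(DateTime creationDate, string procId, string partNumber)
            {
                return string.Format("{0,-14:yyyy-MM-dd}{1,-16}{2,-12}", creationDate, procId, partNumber);
            }

            static void Main()
            {
                var destination = new StringBuilder();
                destination.AppendLine(FormatRow(new DateTime(2007, 3, 3), "234238423823", "823829923"));
                destination.AppendLine(FormatRow(new DateTime(2007, 3, 3), "66", "99"));
                Console.Write(destination.ToString()); // columns stay aligned regardless of value length
            }
        }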

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat file parser that can perform different data scrubbing operations per call, return the emitted results from each scrubbing operation to the caller and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have written this code already in a very unDRY fashion. Which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops, with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/Pseudo code example of what I am repeating. lines_from_file.each do |line| line.match(/some regex/).each do |sub_str| # Process substring into useful format # EG1: Simple gsub() call # EG2: Custom function call to do complex scrubbing # and matching, emitting results to array # EG3: Loop to match opening/closing/nested brackets # or other delimiters and emit results to array end # Add processed lines to a buffer as SQL insert statement @buffer << PREPARED INSERT STATEMENT # Flush buffer when "buffer size limit reached" or "end of file" if sql_buffer_full || last_line_reached @dbc.insert(SQL INSERTS FROM BUFFER) @buffer = nil end end I am familiar with Proc/Lambda functions. However, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what the real Rubyists suggest? Over to you. Thanks in advance :D

    Read the article

  • How to manage data access / preloading efficiently using web services in C# ?

    - by Amadeus45
    Hello all, Ok, this is a very "generic" question. We currently have a SQL Server database for which we need to develop an application in ASP.NET which will contain all the business logic in C# Web Services. The thing is that, architecturally speaking, I'm not sure how to design the web service and the data management. There are many things to consider: We need to have very rapid access to data. Right now, we have over a million "sales" and "purchases" records from which we need to often calculate and load the current stock for a given day according to a series of parameters. I'm not sure how we should preload the data and keep the data in the Web Service. Doing a stock calculation within a SQL query will be very lengthy. They currently have a stock calculation application that preloads all sales and purchases for the day and afterwards calculates the stock on the code side. We want to develop powerful reporting tools. We want to implement a "pivot table" but are not sure how to implement it and have good performance. For the reasons above, I'm not sure how to design the data model. Can anybody give me any guidelines on how to start, or share their personal experiences (what have you done in the past?) I'm not sure if it's possible to make a bounty even though the question is new (I'd put 300 rep on it, since I really need something). If you know how, let me know. Thanks

    Read the article

  • How to get application context path in spring-ws?

    - by Dhaliwal
    I am using Spring-WS to create a webservice. In my project, I have created a Helper class to read sample response and request XML files which are located in my /src/main/resources folder. When I am unit-testing my webservice application 'locally', I use System.getProperty("user.dir") to get the application context folder. The following is a method that I created in the Helper class to help me retrieve the file that I am interested in from my resources folder. public static File getFileFromResources(String filename) { System.out.println("Getting file from resource folder"); File request = null; String curDir = System.getProperty("user.dir"); String contextpath = "C:\\src\\main\\resources\\"; request = new File(curDir + contextpath + filename); return request; } However, after 'publishing' the compiled WAR file to the ../webapps folder of the Apache Tomcat directory, I realise that System.getProperty("user.dir") no longer returns my application context path. Instead, it is returning the Apache Tomcat root directory, as shown: C:\Program Files\Apache Software Foundation\Tomcat 6.0\src\main\resources\SampleClientFile I can't seem to find any information about getting the root folder of my webservice. I have seen examples on Spring web applications where I can retrieve the context path by using the following: request.getSession().getServletContext().getContextPath() But those examples are for a Spring web application where there is a servlet context in the request. With Spring-WS, my entry point is an endpoint. How can I get the context path of my webservice application? I am expecting a context path of something like C:\Program Files\Apache Software Foundation\Tomcat 6.0\webapps\clientWebService\WEB-INF\classes Could someone suggest a way to achieve this?

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because pieces of content are compartmentalized semantically, they can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare: Say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images). — i.e. pieces of content With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor. — i.e. pages of content As you can see, the first design allows me to individually specify content in its smallest semantic unit. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: Is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer? Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool — a CMS designed for adaptive content — exists for developers to use.

    Read the article

  • How to apply jQuery plugin effects to items created later in the DOM

    - by Dan Denney
    How do I apply jQuery plugin effects to items that are created by an event? So, what I am attempting to do is have all uls throughout a site use the jquery.scroll.js plugin for styling. I currently have it working on all active uls. However, there are some category/subcategory sections. I am still new to writing any jQuery on my own and there is a leap in logic that I don't understand on applying plugins to items when they are loaded later. In the real example, users click a category from a ul and a sub category is loaded via Rails and JSON. For a light sample, I made a jsfiddle that simulates the issue that I run into. The category ul is styled correctly, but the subcategory ul doesn't pick up the styling. I'm hoping for some assistance not just in whipping up some code to fix it, but in pointing me in the direction of what I need to learn for this functionality. http://jsfiddle.net/dandenney/25R8F/1/ Thank you in advance!

    Read the article

  • How to load entities into readonly collections using the entity framework

    - by Anton P
    I have a POCO domain model which is wired up to the entity framework using the new ObjectContext class. public class Product { private ICollection<Photo> _photos; public Product() { _photos = new Collection<Photo>(); } public int Id { get; set; } public string Name { get; set; } public virtual IEnumerable<Photo> Photos { get { return _photos; } } public void AddPhoto(Photo photo) { //Some biz logic //... _photos.Add(photo); } } In the above example I have set the Photos collection type to IEnumerable as this will make it read only. The only way to add/remove photos is through the public methods. The problem with this is that the Entity Framework cannot load the Photo entities into the IEnumerable collection as it's not of type ICollection. Changing the type to ICollection would allow callers to call the Add method on the collection itself, which is not good. What are my options?
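
    One compromise that is sometimes used is to let the ORM populate a separate collection property while callers only ever see a read-only view. The sketch below only shows the class shape; whether the protected property can be mapped directly depends on the EF version and configuration, so treat it as an assumption to verify rather than a confirmed EF feature.

        using System.Collections.Generic;
        using System.Linq;

        public class Photo
        {
            public virtual int Id { get; set; }
        }

        public class Product
        {
            // Collection the ORM can materialize into; kept off the public API surface.
            protected virtual ICollection<Photo> PhotosInternal { get; set; }

            public Product()
            {
                PhotosInternal = new List<Photo>();
            }

            public virtual int Id { get; set; }
            public virtual string Name { get; set; }

            // Read-only snapshot handed to callers; additions must go through AddPhoto.
            public IEnumerable<Photo> Photos
            {
                get { return PhotosInternal.ToList(); }
            }

            public void AddPhoto(Photo photo)
            {
                // business rules here
                PhotosInternal.Add(photo);
            }
        }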

    Read the article

  • Is WordPress MVC compliant?

    - by kovshenin
    Some people consider WordPress a blogging platform, some think of it as a CMS, some refer to WordPress as a development framework. Whichever it is, the question still remains. Is WordPress MVC compliant? I've read the forums and somebody asked about MVC about three years ago. There were some positive answers, and some negative ones. While nobody knows exactly what MVC is and everybody thinks of it in their own way, there's still a general concept that's present in all the discussions. I have little experience with MVC frameworks and there doesn't seem to be anything about the framework itself. Most of the MVC is done by the programmer, am I right? Now, going back to WordPress, could we consider the core rewrite engine (WP_Rewrite) the controller? Queries & plugin logic as the model? And themes as the view? Or am I getting it all wrong? Thanks ;)

    Read the article

  • Using SQL dB column as a lock for concurrent operations in Entity Framework

    - by Sid
    We have a long running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table structure columns are approximated to [UserId, col1, col2, ... , col N, beingProcessed, lastTimeProcessed ] beingProcessed is boolean and lastTimeProcessed is DateTime. The logic in every worker role is: public void WorkerRoleMain() { while(true) { try { dbContext db = new dbContext(); // Read foreach (UserProfile user in db.UserProfile .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24) & u.beingProcessed == false)) { user.beingProcessed = true; // Modify db.SaveChanges(); // Write // Do some long drawn processing here ... ... ... user.lastTimeProcessed = DateTime.UtcNow; user.beingProcessed = false; db.SaveChanges(); } } catch(Exception ex) { LogException(ex); Sleep(TimeSpan.FromMinutes(5)); } } // while () } With multiple workers processing as above (each with their own Entity Framework layer), in essence beingProcessed is being used as a lock for MutEx purposes. Question: How can I deal with concurrency issues on the beingProcessed "lock" itself based on the above load? I think the read-modify-write operation on beingProcessed needs to be atomic but I'm open to other strategies. Open to other code refinements too.
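
    One way to make the claim on a row atomic is to push the read-modify-write into a single conditional UPDATE and check the affected row count, so that only one worker can flip the flag. The sketch below uses plain ADO.NET and the column names described above, with a Guid key assumed for UserId; it illustrates the pattern rather than being a drop-in replacement for the Entity Framework code.

        using System;
        using System.Data.SqlClient;

        static class UserClaim
        {
            // Returns true only if this worker won the race to claim the user row.
            public static bool TryClaim(string connectionString, Guid userId)
            {
                const string sql =
                    @"UPDATE UserProfile
                      SET beingProcessed = 1
                      WHERE UserId = @userId
                        AND beingProcessed = 0
                        AND lastTimeProcessed < DATEADD(HOUR, -24, SYSUTCDATETIME());";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@userId", userId);
                    conn.Open();
                    // Zero rows affected means another worker claimed the row first.
                    return cmd.ExecuteNonQuery() == 1;
                }
            }
        }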

    Read the article

  • Advice on using .Net WorkFlow State Machine. What would you do?

    - by jlafay
    So I've been tasked at work to write Windows services to replace some old legacy VB6 WinForms apps currently running as services, consistently repeating tasks day-to-day. To give some general background, they have their own state machines built in to handle decision basing and do not utilize threading. A lot of the senior developers here thought it would be worth a try to look into WorkFlow to replace the state machines rather than write my own business logic and try threading it programmatically. So it's WF vs. the "Old College Try" I suppose. My concern is that there aren't many books on the topic, and since it was implemented in .Net I've heard very little about it being used. I brought this up at work and another developer mentioned that it's because BizTalk never really caught on and it was designed for that. So is it broken? Do you think it will be supported long enough to not worry so much? I don't want an ill-functioning process injected into my services, my new babies at work, and then have WF keel over, leaving me having to replace them with my own code in the event of an emergency; which does not seem like much of a grand scenario to me. Any suggestions or recommendations would be super.

    Read the article

  • FluentNHibernate mapping of composite foreign keys

    - by Faron
    I have an existing database schema and wish to replace the custom data access code with Fluent.NHibernate. The database schema cannot be changed since it already exists in a shipping product. And it is preferable if the domain objects did not change or only changed minimally. I am having trouble mapping one unusual schema construct illustrated with the following table structure: CREATE TABLE [Container] ( [ContainerId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Container] PRIMARY KEY ( [ContainerId] ASC ) ) CREATE TABLE [Item] ( [ItemId] [uniqueidentifier] NOT NULL, [ContainerId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Item] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC ) ) CREATE TABLE [Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC ) ) CREATE TABLE [Item_Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [ItemId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Item_Property] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC, [PropertyId] ASC ) ) CREATE TABLE [Container_Property] ( [ContainerId] [uniqueidentifier] NOT NULL, [PropertyId] [uniqueidentifier] NOT NULL, CONSTRAINT [PK_Container_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC ) ) The existing domain model has the following class structure: The Property class contains other members representing the property's name and value. The ContainerProperty and ItemProperty classes have no additional members. They exist only to identify the owner of the Property. The Container and Item classes have methods that return collections of ContainerProperty and ItemProperty respectively. Additionally, the Container class has a method that returns a collection of all of the Property objects in the object graph. My best guess is that this was either a convenience method or a legacy method that was never removed. The business logic mainly works with Item (as the aggregate root) and only works with a Container when adding or removing Items. I have tried several techniques for mapping this but none work so I won't include them here unless someone asks for them. How would you map this?
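
    For what it's worth, Fluent NHibernate expresses composite primary keys with CompositeId() and multi-column foreign keys with KeyColumns. The sketch below covers only the Item / Item_Property corner of the schema, guesses at property names, and omits the Equals/GetHashCode overrides that composite-id entities normally need, so treat it as a starting point rather than a complete mapping.

        using System;
        using System.Collections.Generic;
        using FluentNHibernate.Mapping;

        public class Item
        {
            public virtual Guid ContainerId { get; set; }
            public virtual Guid ItemId { get; set; }
            public virtual IList<ItemProperty> Properties { get; set; }
        }

        public class ItemProperty
        {
            public virtual Guid ContainerId { get; set; }
            public virtual Guid ItemId { get; set; }
            public virtual Guid PropertyId { get; set; }
        }

        public class ItemMap : ClassMap<Item>
        {
            public ItemMap()
            {
                Table("Item");
                // Composite primary key (ContainerId, ItemId)
                CompositeId()
                    .KeyProperty(x => x.ContainerId)
                    .KeyProperty(x => x.ItemId);
                // Composite foreign key from Item_Property back to Item
                HasMany(x => x.Properties)
                    .Table("Item_Property")
                    .KeyColumns.Add("ContainerId", "ItemId");
            }
        }

        public class ItemPropertyMap : ClassMap<ItemProperty>
        {
            public ItemPropertyMap()
            {
                Table("Item_Property");
                CompositeId()
                    .KeyProperty(x => x.ContainerId)
                    .KeyProperty(x => x.ItemId)
                    .KeyProperty(x => x.PropertyId);
            }
        }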

    Read the article

  • How to Perform Continuous Iteration over a Shared Dictionary in a Multi-threaded Environment

    - by Mubashar Ahmad
    Dear Gurus. Note: please do not suggest alternatives to custom sessions; please answer only in relation to the pattern scenario. I have done custom session management in my application (a WCF service). For this I have a Dictionary shared across all threads. When a specific function gets called I add a new session and issue a SessionId to the client so it can use that SessionId for the rest of its calls, until it calls another specific function which terminates the session and removes it from the Dictionary. For whatever reason the client may not call the session-terminating function, so I have to implement time-expiration logic so that I can remove all such sessions from memory. For this I added a Timer object which calls a ClearExpiredSessions function after a specific period of time and iterates over the dictionary. Problem: As this dictionary gets modified every time a new client comes and leaves, I can't lock the whole dictionary while iterating over it. And if I do not lock the dictionary while iterating and it gets modified from another thread, the enumerator will throw an exception on MoveNext(). So can anybody tell me what kind of design I should follow in this case? Is there any standard pattern available?
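
    One pattern that fits this description on .NET 4 and later is to hold the sessions in a ConcurrentDictionary: its enumerator does not throw when the dictionary is modified concurrently, so the timer can sweep expired entries with TryRemove while other threads keep adding and removing sessions. The session fields below are made up for illustration.

        using System;
        using System.Collections.Concurrent;

        public class SessionInfo
        {
            public DateTime LastActivityUtc { get; set; }
        }

        public class SessionStore
        {
            private readonly ConcurrentDictionary<Guid, SessionInfo> _sessions =
                new ConcurrentDictionary<Guid, SessionInfo>();
            private readonly TimeSpan _timeout = TimeSpan.FromMinutes(20);

            public Guid AddSession()
            {
                var id = Guid.NewGuid();
                _sessions.TryAdd(id, new SessionInfo { LastActivityUtc = DateTime.UtcNow });
                return id;
            }

            public bool RemoveSession(Guid id)
            {
                SessionInfo removed;
                return _sessions.TryRemove(id, out removed);
            }

            // Called from the timer; safe to run while other threads add or remove sessions,
            // because the enumerator tolerates concurrent modification.
            public void ClearExpiredSessions()
            {
                foreach (var pair in _sessions)
                {
                    if (DateTime.UtcNow - pair.Value.LastActivityUtc > _timeout)
                    {
                        SessionInfo removed;
                        _sessions.TryRemove(pair.Key, out removed);
                    }
                }
            }
        }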

    Read the article

  • SQL Query Math Gymnastics

    - by keruilin
    I have two tables of concern here: users and race_weeks. User has many race_weeks, and race_week belongs to User. Therefore, user_id is a fk in the race_weeks table. I need to perform some challenging math on fields in the race_weeks table in order to return users with the most all-time points. Here are the fields that we need to manipulate in the race_weeks table. races_won (int) races_lost (int) races_tied (int) points_won (int, pos or neg) recordable_type(varchar, Robots can race, but we're only concerned about type 'User') Just so that you fully understand the business logic at work here, over the course of a week a user can participate in many races. The race_week record represents the summary results of the user's races for that week. A user is considered active for the week if races_won, races_lost, or races_tied is greater than 0. Otherwise the user is inactive. So here's what we need to do in our query in order to return users with the most points won (actually net_points_won): Calculate each user's net_points_won (not a field in the DB). To calculate net_points, you take (1000 * count_of_active_weeks) - sum(points_won). (Why 1000? Just imagine that every week the user is spotted 1000 points to compete and enter races. We want to factor out what we spot the user, because the user could enter only one race for the week for 100 points and be sitting on 900, which would skew who actually EARNED the most points.) This one is a little convoluted, so let me know if I can clarify further.

    Read the article

  • Parallel.Foreach loop creating multiple db connections throws connection errors?

    - by shawn.mek
    Login failed. The login is from an untrusted domain and cannot be used with Windows authentication I wanted to get my code running in parallel, so I changed my foreach loop to a parallel foreach loop. It seemed simple enough. Each loop connects to the database, looks up some stuff, performs some logic, adds some stuff, closes the connection. But I get the above error? I'm using my local SQL Server and Entity Framework (each loop uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? I have (before trying to convert to a Parallel.ForEach loop) split my list of objects that I am foreach looping through into four groups (separate csv files) and run four concurrent instances of my program (which ran faster overall than just one, thus the idea for parallel). So it seems connecting to the db shouldn't be a problem? Any ideas? EDIT: Here's before var gtgGenerator = new CustomGtgGenerator(); var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString; var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId); ForEach(cloneIdAndAccessions in allAccessionsFromObs) DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString)); after var gtgGenerator = new CustomGtgGenerator(); var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString; var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId); Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions => DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString)); Inside the DoWork I use the BioEntities using (var bioEntities = new BioEntities(connectionString)) {...}
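
    For reference, Parallel.ForEach also has an overload that accepts a ParallelOptions argument, which is one way to cap how many iterations, and therefore how many simultaneous connections, run at once. The sketch below is generic and does not diagnose the Windows-authentication error itself; the per-iteration context from the question is indicated only in a comment.

        using System;
        using System.Threading.Tasks;

        class ParallelDbSketch
        {
            static void Main()
            {
                var items = new[] { 1, 2, 3, 4, 5, 6, 7, 8 };

                // Bound the number of iterations running at the same time.
                var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };

                Parallel.ForEach(items, options, item =>
                {
                    // In the real code this is where the context lives and dies:
                    // using (var bioEntities = new BioEntities(connectionString)) { ... }
                    Console.WriteLine("processing {0} on thread {1}",
                        item, System.Threading.Thread.CurrentThread.ManagedThreadId);
                });
            }
        }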

    Read the article

  • NHibernate listener/event to replace object before insert/update

    - by vIceBerg
    Hi! I have a Company class which has a collection of Address. Here's my Address class (written off the top of my head): public class Address { public string Civic; public string Street; public City City; } This is the City class: public class City { public int Id; public string Name; public string SearchableName{ get { //code } } } Address is persisted in its own table and has a reference to the city's ID. City is also persisted in its own table. The City's SearchableName is used to prevent some mistakes in the city name the user types. For example, if the city name is "Montréal", the searchable name will be "montreal". If the city name is "St. John", the searchable name will be "stjohn", etc. It's used mainly to search and to prevent having multiple cities with a typo in them. When I save an address, I want a listener/event to check if the city is already in the database. If so, cancel the insert and replace the user's city with the database one. I would like the same behavior with updates. I tried this: public bool OnPreInsert(PreInsertEvent @event) { City entity = (@event.Entity as City); if (entity != null) { if (entity.Id == 0) { var city = (from c in @event.Session.Linq<City>() where c.SearchableName == entity.SearchableName select c).SingleOrDefault(); if (city != null) { //don't know what to do here return true; } } } return false; } But if there's already a City in the database, I don't know what to do. @event.Entity is read-only; if I set @event.Entity.Id, I get a "null identifier" exception. I tried to trap insert/update on Address and on Company, but the City is the first one to get inserted (which is logical...) Any thoughts? Thanks

    Read the article

  • Finding Local IP via Socket Creation / getsockname

    - by BSchlinker
    I need to get the IP address of a system within C++. I followed the logic and advice of another comment on here and created a socket and then utilized getsockname to determine the IP address which the socket is bound to. However, this doesn't appear to work (code below). I'm receiving an invalid IP address (58.etc) when I should be receiving a 128.etc Any ideas? string Routes::systemIP(){ // basic setup int sockfd; char str[INET_ADDRSTRLEN]; sockaddr* sa; socklen_t* sl; struct addrinfo hints, *servinfo, *p; int rv; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_DGRAM; if ((rv = getaddrinfo("4.2.2.1", "80", &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return "1"; } // loop through all the results and make a socket for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("talker: socket"); continue; } break; } if (p == NULL) { fprintf(stderr, "talker: failed to bind socket\n"); return "2"; } // get information on the local IP from the socket we created getsockname(sockfd, sa, sl); // convert the sockaddr to a sockaddr_in via casting struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa; // get the IP from the sockaddr_in and print it inet_ntop(AF_INET, &(sa_ipv4->sin_addr.s_addr), str, INET_ADDRSTRLEN); printf("%s\n", str); // return the IP return str; }

    Read the article

  • C# delegate or Func for 'all methods'?

    - by Michel
    Hi, I've read something about Funcs and delegates and that they can help you pass a method as a parameter. Now I have a caching service, and it has this declaration: public static void AddToCache<T>(T model, double millisecs, string cacheId) where T : class public static T GetFromCache<T>(string cacheId) where T : class So in a place where I want to cache some data, I check if it exists in the cache (with GetFromCache) and if not, get the data from somewhere and then add it to the cache (with AddToCache). Now I want to extend the AddToCache method with a parameter, which is the class+method to call to get the data. Then the declaration would be like this: public static void AddToCache<T>(T model, double millisecs, string cacheId, Func/Delegate methode) where T : class Then this method could check whether the cache has data or not, and if not, get the data itself via the method it got provided. Then in the calling code I could say: AddToCache<Person>(p, 10000, "Person", new PersonService().GetPersonById(1)); AddToCache<Advert>(a, 100000, "Advert", new AdvertService().GetAdverts(3)); What I want to achieve is that the 'if cache is empty, get data and add to cache' logic is placed in only one place. I hope this makes sense :) Oh, by the way, the question is: is this possible?
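
    Yes, this is exactly the kind of thing Func<T> is used for: the caller passes a factory delegate and the cache method invokes it only on a miss. Below is a sketch of that shape; the in-memory dictionary stands in for the real caching service and expiry handling is left out, so it is illustrative rather than a finished implementation.

        using System;
        using System.Collections.Generic;

        public static class CachingService
        {
            private static readonly Dictionary<string, object> Store = new Dictionary<string, object>();

            // The Func<T> is only invoked when the cache has no entry for cacheId.
            public static T GetOrAdd<T>(string cacheId, double millisecs, Func<T> fetch) where T : class
            {
                object cached;
                if (Store.TryGetValue(cacheId, out cached))
                {
                    return (T)cached;
                }

                T value = fetch();      // e.g. () => new PersonService().GetPersonById(1)
                Store[cacheId] = value; // a real implementation would also honour the millisecs expiry
                return value;
            }
        }

        // Usage (with a lambda so the service call only runs on a cache miss):
        // var person = CachingService.GetOrAdd("Person", 10000, () => new PersonService().GetPersonById(1));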

    Read the article

  • How to trigger the event together on each of two different classes

    - by XBasic3000
    I have two object classes in a single unit; is it possible to trigger the two events? Let's say the FIRSTCLASS event is fired; will the SECONDCLASS event also fire? Assuming...... //{Class 1}------------------------------------------------------------- type TOnEventTrigger = procedure(Sender : Tobject; Value :integer); TMyFirstClass = Class(Tcomponent) private .... public .... OnEventTrigger : TOnEventTrigger read Fevent write Fevent; end; procedure TMyFirstClass.FEvnt(Sender : Tobject; Value :integer); begin // here is normaly triggers the event // if Assigned(OnEventTrigger) then OnEventTrigger(Self,FSomevalue); // POSTMessage(GetForegroundWindow,WM_USER+3,0,0); // this is what i did here to get the result of FSomevalue // but this is not ideal. It work only on focus window. end; //{Class 2}------------------------------------------------------------- type TOnEventTrigger = procedure(Sender : Tobject; Value :integer); TMySecondClass = Class(Tobject) private .... public .... OnEventTrigger : TOnEventTrigger; read Fevent write Fevent; end; procedure TMySecondClass.FEvnt(Sender : Tobject; Value :integer); begin // I wanted here to trigger, whenenver the above is fired // if Assigned(OnEventTrigger) then OnEventTrigger(Self,FSomevalue); end;

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a Webapp which connects to 2 DBs (one core, the other is a logging DB). I must now create a Windows service which will use the same business logic/Data access DLLs. However when I try to reference 2 session factories in the Service App and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion about how this can be done? public class StaticSessionManager { public static readonly ISessionFactory SessionFactory; public static readonly ISessionFactory LoggingSessionFactory; static StaticSessionManager() { string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"]; string executingPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase); fileName = executingPath + "\\" + fileName; SessionFactory = cfg.Configure(fileName).BuildSessionFactory(); cfg = new Configuration(); fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"]; fileName = executingPath + "\\" + fileName; LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory(); } } The configuration file has the setting: <property name="current_session_context_class">call</property> The service sets up the factories: private ISession _session = null; private ISession _loggingSession = null; private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory; private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory; ... _sessionFactory = StaticSessionManager.SessionFactory; _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory; _session = _sessionFactory.OpenSession(); NHibernate.Context.CurrentSessionContext.Bind(_session); _loggingSession = _loggingSessionFactory.OpenSession(); NHibernate.Context.CurrentSessionContext.Bind(_loggingSession); So finally, I try to call the correct factory by: ISession session = StaticSessionManager.SessionFactory.GetCurrentSession(); Can anyone suggest a better way to handle this? Thanks in advance! Rob

    Read the article

  • autologin component doesn't work on remote server

    - by user606521
    I am using the autologin component from http://milesj.me/code/cakephp/auto-login (v 3.5.1). It works on my localhost WAMP server but fails on the remote server. I am using these settings in the beforeFilter() callback: $this->AutoLogin->settings = array( // Model settings 'model' => 'User', 'username' => 'username', 'password' => 'password', // Controller settings 'plugin' => '', 'controller' => 'users', // Cookie settings 'cookieName' => 'rememberMe', 'expires' => '+1 month', // Process logic 'active' => true, 'redirect' => true, 'requirePrompt' => true ); On the remote server it simply doesn't log users in automatically after the browser is closed. I can't figure out what may cause the problem. -------------------- edit I figured out what is causing the problem but I don't know how to fix it. First of all, the cookie is set like this: $this->Cookie->write('key',array('username' => 'someusername', 'hash' => 'somehash', ...) ); Then it's read like this: $cookie = $this->Cookie->read('key'); On my WAMP server $cookie is array('username' => 'someusername', 'hash' => 'somehash', ...) and on the remote server the returned $cookie is string(159) "{\"username\":\"YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t\",\"password\":\"YWxlazc3ODEy\",\"hash\":\"aa15bffff9ca12cdcgfgb351d8bfg2f370bf458\",\"time\":1339923926}" and it should be: array( 'username' => "YWxlay5iYXJzsdsdZXdza2ldssd21haWwuY29t", 'password' => "YWxlazc3ODEy", ...) Why is the returned cookie a string and not an array?

    Read the article

  • Is it possible to replace values in a queryset before sending it to your template?

    - by Issy
    Hi Guys, Wondering if it's possible to change a value returned from a queryset before sending it off to the template. Say for example you have a bunch of records Date | Time | Description 10/05/2010 | 13:30 | Testing... etc... However, based on the day of the week the time may change, but this is static. For example, on a Monday the time is ALWAYS 15:00. Now you could add another table to configure special cases but to me it seems overkill, as this is a rule. How would you replace that value before sending it to the template? I thought about using the new if tags (if day=1), but this is more business logic than presentation. Tested this in a custom template tag def render(self, context): result = self.model._default_manager.filter(from_date__lte=self.now).filter(to_date__gte=self.now) if self.day == 4: result = result.exclude(type__exact=2).order_by('time') else: result = result.order_by('type') result[0].time = '23:23:23' context[self.varname] = result return '' However, it still displays the results from the DB; is this somehow related to 'lazy' evaluation of templates? Thanks! Update Responding to comments below: It's not stored wrong in the DB, it's stored correctly. However, there is a small side case where the value needs to change. So for example I have a From Date & To Date; my query checks if today's date is between those. Now with this they could set up a from date - to date for an entire year, and the special cases (like Mondays, for example) are taken care of. However, if you want to store it in the DB you would have to capture several more records to cater for the side case, i.e. you would be capturing the same information just to cater for that one day when the time changes. (And the time always changes on the same day, and is always the same)

    Read the article

  • Looping and pausing after loading ajax content in Javascript JQuery

    - by Tristan
    I have what I thought was a simple problem to solve (never is!) I'm trying to loop through a list of URL's in a javascript array I have made, load the first one, wait X seconds, then load the second, and continue until I start again. I got the array and looping working; the trouble is, however I try to implement a "wait" using setInterval or similar, I have a structural issue, as the loop continues in the background. I tried to code it like this: $(document).ready(function(){ // my array of URL's var urlArray = new Array(); urlArray[0] = "urlOne"; urlArray[1] = "urlTwo"; urlArray[2] = "urlThree"; // my looping logic that continues to execute (problem starts here) while (true) { for (var i = 0; i < urlArray.length; i++) { $('#load').load(urlArray[i], function(){ // now ideally I want it to wait here for X seconds after loading that URL and then start the loop again, but javascript doesn't seem to work this way, and I'm not sure how to structure it to get the same effect }); } } });

    Read the article

  • Flex Modules vs RSL

    - by nil
    Hi, I'm a little bit confused about when it is better to use Flex Modules or RSL libraries (in Flex 3.5). My goal is to split my project into several unit projects, so I can test and work on them separately. Let's assume I have a Customer app and a Vendor app. I also have a front-end panel with two buttons. Each button launches the Customer app or the Vendor app. These applications do different things. They share some .as functions and common components, too. I understand that if I make a main project (for user login and to show a first panel) and two modules (customer, vendor) I must have all those components in my Eclipse project, right? Instead of using modules, should I create an SWC for the Vendor app and another for the Customer app and call them from the main app by using RSLs? So, which option is more suitable? What do you advise? What are the trade-offs of each option? On the other side, this Flex application is integrated with Java through Blaze and iBatis for persistence management, and is hosted by an Apache web server. I also considered creating independent WAR files to keep this independence, but I thought this would not optimize the Flex code. Am I right? Thank you. Nil

    Read the article
