Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • should jQuery data be chainable?

    - by pedalpete
    I'm trying to add multiple jQuery data entries to a single element. I suspected that the following would work: jQuery('td.person#a'+personId).data('email',thisPerson.email).data('phone',thisPerson.phone); However, I am getting nothing but errors when I do this. jQuery('td.person#a'+personId).data('email',thisPerson.email); jQuery('td.person#a'+personId).data('phone',thisPerson.phone); Is there another way to get more than one data entry on an element? Hopefully chained?

    Read the article

  • Data bind enum properties to grid and display description

    - by TrueWill
    This is a similar question to How to bind a custom Enum description to a DataGrid, but in my case I have multiple properties. public enum ExpectationResult { [Description("-")] NoExpectation, [Description("Passed")] Pass, [Description("FAILED")] Fail } public class TestResult { public string TestDescription { get; set; } public ExpectationResult RequiredExpectationResult { get; set; } public ExpectationResult NonRequiredExpectationResult { get; set; } } I'm binding a BindingList<TestResult> to a WinForms DataGridView (actually a DevExpress.XtraGrid.GridControl, but a generic solution would be more widely applicable). I want the descriptions to appear rather than the enum names. How can I accomplish this? (There are no constraints on the class/enum/attributes; I can change them at will.)

    Read the article

  • Java's ThreadPoolExecutor equivalent for C#?

    - by chillitom
    Hi Guys, I used to make good use of Java's ThreadPoolExecutor class and have yet to find a good equivalent in C#. I know of ThreadPool.QueueUserWorkItem, which is useful in many cases but no good if you want to control the number of threads assigned to a task or have multiple individual queues for different task types. For example, I liked to use a ThreadPoolExecutor with a single thread to guarantee sequential execution of asynchronous calls. Is there an easy way to do this in C#? Is there a non-static thread pool implementation? Thanks, T.
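
    Assuming there is no direct framework equivalent available (as the question suggests), a single-threaded, sequential executor is easy to sketch. The class below is only a minimal illustration assuming .NET 4 or later (it uses BlockingCollection); it is not a drop-in replacement for the Java class.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Minimal single-threaded executor: work items run one at a time,
        // in the order they were queued, on a dedicated background thread.
        public sealed class SingleThreadExecutor : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _worker;

            public SingleThreadExecutor()
            {
                _worker = new Thread(() =>
                {
                    foreach (Action work in _queue.GetConsumingEnumerable())
                    {
                        work();   // exceptions should be handled inside the work item
                    }
                });
                _worker.IsBackground = true;
                _worker.Start();
            }

            public void Execute(Action work)
            {
                _queue.Add(work);
            }

            public void Dispose()
            {
                _queue.CompleteAdding();   // lets the worker drain the queue and exit
                _worker.Join();
            }
        }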

    Read the article

  • Loading Fact Table + Lookup / UnionAll for SK lookups.

    - by Nev_Rahd
    I need to populate a fact table with 12 lookups to dimension tables to get surrogate keys (SKs). Six of them are to different dimension tables, and the remaining six are lookups against the same (Type II) dimension table, all on the same natural key. For example:
        PrimeObjectID  = lookup to DimObject.ObjectID = get ObjectSK
        OtherObjectID1 = lookup to DimObject.ObjectID = get ObjectSK
        OtherObjectID2 = lookup to DimObject.ObjectID = get ObjectSK
        OtherObjectID3 = lookup to DimObject.ObjectID = get ObjectSK
        OtherObjectID4 = lookup to DimObject.ObjectID = get ObjectSK
        OtherObjectID5 = lookup to DimObject.ObjectID = get ObjectSK
    How should I handle this many lookups in my SSIS package? For now I am using a Lookup / Union All pair for each lookup. Is there a better way to do this?

    Read the article

  • Cookie questions in Java

    - by user220201
    Hi, How do I differentiate between multiple cookies set through my site? I am setting two kinds of cookies: one to see whether the user has visited the site or not, and another one for authentication. How do I differentiate between these two? I get both of them when someone accesses a page after authentication. Do I add extra information to the cookie value, or is there some other way? I understand the setName() function will change the name (from jsessionid) for every cookie from then on. Am I correct? Pav

    Read the article

  • How to make g++ search for header files in a specific directory?

    - by Bane
    I have a project that is subdivided into a few directories with code in them. I'd like to have g++ search for header files in the project's root directory, so I can avoid different include paths for the same header files across multiple source files. Mainly, the root/ directory has sub-directories A/, B/ and C/, all of which have .hpp and .cpp files inside. If some source file in A wanted to include file.hpp, which was in B, it would have to do it like this: #include "../B/file.hpp". Same for another source file in C. But if A itself had sub-directories with files that needed file.hpp, then it would be inconsistent and would cause errors if I decided to move files (because the include path would be "../../B/file.hpp"). Also, this would need to work from other projects as well, which reside outside of root/. I already know that there is an option to manually copy all my header files into a default-search directory, but I'd like to do this the way I described.

    Read the article

  • Are there any nasty side effects if I lock the HttpContext.Current.Cache.Insert method

    - by Ekk
    Apart from blocking other threads reading from the cache, what other problems should I be thinking about when locking the cache insert method for a public-facing website? The actual data retrieval and insert into the cache should take no more than 1 second, which we can live with. More importantly, I don't want multiple threads potentially all hitting the Insert method at the same time. The sample code looks something like: public static readonly object _syncRoot = new object(); if (HttpContext.Current.Cache["key"] == null) { lock (_syncRoot) { HttpContext.Current.Cache.Insert("key", "DATA", null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.Normal, null); } } Response.Write(HttpContext.Current.Cache["key"]);
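
    Worth noting about the sample as written: the cache is only checked before the lock is taken, so several threads that all saw a null entry will still serialize through the lock and each perform the (possibly expensive) load and insert. A common refinement is to re-check inside the lock. This is only a sketch; LoadData() is a made-up placeholder for the real retrieval code.

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class CachedLookup
        {
            private static readonly object _syncRoot = new object();

            public static string GetCachedData()
            {
                var value = HttpContext.Current.Cache["key"] as string;
                if (value == null)
                {
                    lock (_syncRoot)
                    {
                        // Re-check inside the lock: another thread may have
                        // populated the cache while we were waiting.
                        value = HttpContext.Current.Cache["key"] as string;
                        if (value == null)
                        {
                            value = LoadData();   // hypothetical expensive retrieval
                            HttpContext.Current.Cache.Insert(
                                "key", value, null,
                                DateTime.Now.AddMinutes(5),
                                Cache.NoSlidingExpiration,
                                CacheItemPriority.Normal, null);
                        }
                    }
                }
                return value;
            }

            // Placeholder for the real data access call.
            private static string LoadData()
            {
                return "DATA";
            }
        }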

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project.

    More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy.

    This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA.

    Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • Graph diffing and versioning tool

    - by hashable
    I am working with a team that edits large DAGs represented as single files. Currently we are unable to work with multiple users concurrently modifying the DAG. Is there a tool (somewhat like the Eclipse SVN plugin) that can do revision control on the file (manage timestamps/revision stamps) to identify incoming/outgoing/conflicting changes (Node/Link insertion/deletion/modification) and merge changes just like programmers do with source code files? The system should be able to do dependency management also. E.g. an incoming Link must not be accepted when one of the two Nodes is absent. That is, it should not "break" the existing DAG by allowing partial updates. Is there a framework to do this using generic "Node" and "Link" interfaces? Note: I am aware of Protege and its plugins. They currently do not satisfy my requirements.

    Read the article

  • How to configure AutoMapper if not ASP.net Application?

    - by alex
    I'm using AutoMapper in a number of projects within my solution. These projects may be deployed independently, across multiple servers. In the documentation for AutoMapper it says: If you're using the static Mapper method, configuration only needs to happen once per AppDomain. That means the best place to put the configuration code is in application startup, such as the Global.asax file for ASP.NET applications. Whilst some of the projects will be ASP.NET, most of these are class libraries / Windows services. Where should I be configuring my mappings in this case?
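
    A minimal sketch of one way to handle this, assuming the older static Mapper API that the quoted documentation refers to: wrap the configuration in an idempotent method and call it once from whatever the process entry point is (Main(), a Windows service's OnStart(), or Global.asax where one exists). The type names below are placeholders.

        using AutoMapper;

        // Placeholder types for illustration only.
        public class CustomerEntity { public string Name { get; set; } }
        public class CustomerDto    { public string Name { get; set; } }

        public static class MappingConfig
        {
            private static bool _configured;
            private static readonly object _sync = new object();

            // Call once per AppDomain, e.g. from Main() or a service's OnStart().
            public static void Configure()
            {
                lock (_sync)
                {
                    if (_configured) return;

                    Mapper.CreateMap<CustomerEntity, CustomerDto>();
                    Mapper.AssertConfigurationIsValid();

                    _configured = true;
                }
            }
        }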

    Read the article

  • How to display configurable product in each color in product listing?

    - by Thomas
    I have a configurable product which is available in many different colors and sizes. I want the configurable product to appear once for every color. My idea is to assign one simple product of the configurable product in every color to the category of the configurable product. Then I want to change the listing, so that the (colored) simple product links to its master product (the configurable one). The other way would be to just assign the configurable product to a category and then list it multiple times with different colors. But I think this would be too complicated.

    Read the article

  • SQL 2005 w/ C# optimal "Paging"

    - by David Murdoch
    When creating a record "grid" with custom paging what is the best/optimal way to query the total number of records as well as the records start-end using C#? SQL to return paged record set: SELECT Some, Columns, Here FROM ( SELECT ROW_NUMBER() OVER (ORDER BY Column ASC) AS RowId, * FROM Records WHERE (...) ) AS tbl WHERE ((RowId > @Offset) AND (RowId <= (@Offset + @PageSize)) ) SQL to count total number of records: SELECT COUNT(*) FROM Records WHERE (...) Right now, I make two trips to the server: one for getting the records, and the other for counting the total number of records. What is/are the best way(s) to combine these queries to avoid multiple DB trips?

    Read the article

  • Need help using ThemeRoller

    - by Music Magi
    Hi - I'm using DataTables to dress up a table I'm using to display XML results based on an XSL transformation. I have everything working in a technical sense (paging, sorting, filtering), but I'm trying to figure out how to use a ThemeRoller theme to make it look like it does on their website. So far, I have added the following file to my project and referenced it: <link type="text/css" href="css/custom-theme/jquery-ui-1.8.7.custom.css" rel="stylesheet"/> and enabled ThemeRoller themes using the following, as per the DataTables website: $(document).ready(function() { $('#mainTable').dataTable( { "bJQueryUI": true, "sPaginationType": "two_button" }); }); The table gets styled, but it doesn't look right: the header rows are too wide and things are on multiple lines that should be on one line. Any indication as to what I'm doing wrong? Thanks very much in advance.

    Read the article

  • Parsing multiple XML files in Android

    - by cppdev
    Hi, I am writing an application where I have to parse multiple XML files received as responses from a server. Till now, I have written a different XML parser class for each XML, depending on the tags present in the different XMLs. Can I combine all the XML parser classes and write a single XML parser that handles all the different tags in the different XMLs? Will it work? Will combining the XML parser classes add overhead, because the parser will check for every tag irrespective of whether it is part of the file or not? Please help.

    Read the article

  • Macro to copy data from one sheet to another based on the current date

    - by SgtSnafu
    Does anyone have a macro that copies data from one sheet to another based on the current date? I am working with a single workbook of three sheets. Sheet one will hold the manual input of daily production figures for multiple plants; sheet two is to hold ongoing daily data, keyed from sheet one. The macro will be associated with a button, so that once clicked it searches for every row that has today's date and copies that row to the next available blank row on sheet two. Sample data:
        Plant 1 Input
        Date    - $ Produced - Labor Hour
        3-29-10 - 4538       - 8
        3-30-10 - 7862       - 12
        3-31-10
        4-1-10
        4-2-10
        Plant 2 Input
        Date    - $ Produced - Labor Hour
        3-29-10 - 4545       - 9
        3-30-10 - 7645       - 12
        3-31-10
        4-1-10
        4-2-10

    Read the article

  • How to build a Windows Forms email-aware text box?

    - by Patrick Linskey
    Hi, I'm building a C# client app that allows a user to communicate with one or more existing users in a system via an email-like metaphor. I'd like to present the user with a text entry box that auto-completes on known email addresses, and allows multiple delimiter-separated addresses to be entered. Ideally, I'd also like the email addresses to turn into structured controls once they've been entered and recognized. Basically, I'm modeling the UI interaction for adding users after Facebook's model. Are there any Windows Forms controls out there with the ability to do something like this? Is there any well-established terminology for a hybrid textbox / control list box (no, not a ComboBox) or something that I should be searching for? Thanks, -Patrick
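
    For the autocomplete part, the built-in Windows Forms TextBox support is the obvious starting point; note that it matches against the whole textbox content, so per-address completion after a delimiter, and the structured "chip" controls, would still need custom work or a third-party control. A minimal sketch with made-up addresses:

        using System.Windows.Forms;

        public static class EmailBoxFactory
        {
            public static TextBox CreateEmailTextBox()
            {
                var addresses = new AutoCompleteStringCollection();
                addresses.AddRange(new[]
                {
                    "alice@example.com",    // illustrative known addresses
                    "bob@example.com",
                    "carol@example.com"
                });

                return new TextBox
                {
                    AutoCompleteMode = AutoCompleteMode.SuggestAppend,
                    AutoCompleteSource = AutoCompleteSource.CustomSource,
                    AutoCompleteCustomSource = addresses
                };
            }
        }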

    Read the article

  • Tab versus space indentation in C#

    - by Lars Fastrup
    I sometimes find myself discussing this issue with other C# developers, especially if we use different styles. I can see the advantage of tab indentation: it allows different developers to browse the code with their favorite indent size. Nonetheless, I long ago went for two-space indentation in my C# code and have stuck with it ever since, mainly because I often disliked the way statements spanning multiple lines are sometimes messed up when viewing code from other developers using another tab size. Recently a developer at one of my clients approached me and asked why I did not use tabs, because he preferred to view code with an indentation size of 4. So my question is: which style do you prefer and why?

    Read the article

  • SQL Server Command Timeouts: Timeout expired. The timeout period elapsed prior to completion of the operation

    - by user104295
    Hi there, we've had an ASP.NET web application deployed for a number of years now... last week we migrated to a slightly slower server to save some money. Now we're frequently getting command timeouts: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. This is understandable in that a slower server takes longer to produce results; queries take longer and hit the timeout. Is there any system-wide way to get SqlClient to use a longer timeout? We cannot change the code, as it's everywhere... and we're using multiple data access technologies as well. Maybe there's a default command timeout setting on a connection string? We just need to increase it by 30 seconds; we're happy to wait a bit longer for queries to return. Thanks
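
    As far as I know, classic System.Data.SqlClient has no machine- or application-wide command timeout setting (the connection string's Connect Timeout only affects opening the connection), so the value has to be applied wherever commands are created, for example in a shared factory method if one exists. A trivial sketch of that idea:

        using System.Data.SqlClient;

        public static class CommandFactory
        {
            // Default SqlCommand.CommandTimeout is 30 seconds; raise it here once
            // instead of at every call site that goes through this factory.
            public const int CommandTimeoutSeconds = 60;

            public static SqlCommand Create(string sql, SqlConnection connection)
            {
                return new SqlCommand(sql, connection)
                {
                    CommandTimeout = CommandTimeoutSeconds
                };
            }
        }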

    Read the article

  • WMemoryProfiler is Released

    - by Alois Kraus
    What is it? WMemoryProfiler is a managed profiling Api to aid integration testing. This free library can get managed heap statistics and memory usage for your own process (remember testing) and other processes as well. The best thing is that it does work from .NET 2.0 up to .NET 4.5 in x86 and x64. To make it more interesting it can attach to any running .NET process. The reason why I do mention this is that commercial profilers do support this functionality only for their professional editions. And normally only since .NET 4.0, since only then does the profiling API support attaching to a running process. This thing does differ in many aspects from “normal” profilers because while profiling yourself you can get all objects from all managed heaps back as an object array. If you ever wanted to change the state of an object which only exists as a method local in another thread, you can get your hands on it now …

    Enough theory. Show me some code

        /// <summary>
        /// Show feature to not only get statistics out of a process but also the newly allocated
        /// instances since the last call to MarkCurrentObjects.
        /// GetNewObjects does return the newly allocated objects as object array
        /// </summary>
        static void InstanceTracking()
        {
            using (var dumper = new MemoryDumper()) // if you have problems use new MemoryDumper(true,true) to see the debugger windows
            {
                dumper.MarkCurrentObjects();
                Allocate();
                ILookup<Type, object> newObjects = dumper.GetNewObjects()
                                                         .ToLookup(x => x.GetType());
                Console.WriteLine("New Strings:");
                foreach (var newStr in newObjects[typeof(string)])
                {
                    Console.WriteLine("Str: {0}", newStr);
                }
            }
        }

        …
        New Strings:
        Str: qqd
        Str: String data:
        Str: String data: 0
        Str: String data: 1
        …

    This is really hot stuff. Not only can you get heap statistics but you can directly examine the new objects and make queries upon them. When I do find more time I can reconstruct the object root graph from it from my own process. Is this cool or what? You can also peek into the Finalization Queue to check if you did accidentally forget to dispose a whole bunch of objects …

        /// <summary>
        /// .NET 4.0 or above only. Get all finalizable objects which are ready for finalization and have no other object roots anymore.
        /// </summary>
        static void NotYetFinalizedObjects()
        {
            using (var dumper = new MemoryDumper())
            {
                object[] finalizable = dumper.GetObjectsReadyForFinalization();
                Console.WriteLine("Currently {0} objects of types {1} are ready for finalization. Consider disposing them before.",
                    finalizable.Length,
                    String.Join(",", finalizable.ToLookup(x => x.GetType())
                                                .Select(x => x.Key.Name)));
            }
        }

    How does it work? The W of WMemoryProfiler is a good hint. It does employ Windbg and SOS dll to do the heavy lifting and concentrates on an easy to use Api which completely hides Windbg. If you do not want to see Windbg you will never see it. In my experience the most complex thing is actually to download Windbg from the Windows 8 Standalone SDK. This is described in much greater detail in the Readme and in the exception you are greeted with if it is missing. So I will not go into this here.

    What Next? Depending on the feedback I do get I can imagine some features which might be useful as well:
    - Calculate first order GC Roots from the actual object graph
    - Identify global statics in Types in object graph
    - Support read out of finalization queue of .NET 2.0 as well
    - Support Memory Dump analysis (again a feature only supported by commercial profilers in their professional editions, if it is supported at all)
    - Deserialize objects from a memory dump back into a live process (this would need some more investigation but it is doable)

    The last item needs some explanation. Why on earth would you want to do that? The basic idea is to store some logging/tracing data in your live process, which can become quite big, but since it is never written out it is very fast to generate. When your process crashes with a memory dump you could transfer this data structure back into a live viewer which can then nicely display your program state at the point it crashed. This is an advanced troubleshooting technique I have not seen anywhere yet, but it could be quite useful. You can have a look here at the current feature list of WMemoryProfiler with some examples.

    How To Get Started? First I would download the released source package (it is tiny) and compile the complete project. Then you can compile the Example project (it has this name) and uncomment in the main method the scenario you want to check out. If you are greeted with an exception it is time to install the Windows 8 Standalone SDK, which is described in great detail in the exception text. That's it for the first round. I have seen something more limited in the Java world some years ago (now I cannot find the link anymore), but anyway, now we have something much better.

    Read the article

  • Need some clarification with Patterns (DAO x Gateway)

    - by Marcos Placona
    My colleagues and I got into this discussion early this morning, and our opinions started to clash a bit, so I decided to get some impartial advice here. One of my colleagues reckons that the DAO should return an object (a populated bean). I think that's completely fine when you're returning a recordset with only one line, but overkill if you have to return 10 lines and create 10 separate objects. I, on the other hand, see the difference between the DAO and Gateway patterns as being that the Gateway pattern allows you to return a recordset to your business class, which then deals with the recordset data and does whatever it needs to do. My questions here are: Which assumptions are correct? What should the return type be for a DAO (i.e. getContact() - for one record)? Should getContacts() (for multiple records) even be on the DAO, and if so, what is its return type? We seem to be having some sort of confusion about the DAO and Gateway patterns. Should they be used together? Thanks in advance
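
    For reference, the way the two patterns are usually described: a DAO deals in typed objects, so getContacts() belongs on it and returns a collection of beans, while a gateway hands back the raw recordset for the caller to interpret. A hedged C# sketch of the shape of each; the names are illustrative only.

        using System.Collections.Generic;
        using System.Data;

        // Illustrative domain object.
        public class Contact
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Email { get; set; }
        }

        // DAO style: callers only ever see typed objects.
        public interface IContactDao
        {
            Contact GetContact(int id);       // single record -> single object
            IList<Contact> GetContacts();     // many records -> collection of objects
        }

        // Gateway style: callers get the raw tabular result
        // and decide themselves what to do with it.
        public interface IContactGateway
        {
            DataTable GetContacts();
        }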

    Read the article

  • Is there a list of language only character regions for UTF-8 somewhere?

    - by Brehtt
    I'm trying to analyze some UTF-8 encoded documents in a way that recognizes different language characters. For my approach to work I need to ignore non-language characters, such as control characters, mathematical symbols etc. Just trying to dissect the basic Latin section of the UTF standard has resulted in multiple regions, with characters like the division symbol being right in the middle of a range of valid Latin characters. Is there a list somewhere that identifies these regions? Or better yet, a Regex that defines the regions or something in C# that can identify the different characters?
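
    In .NET the Unicode general categories are already exposed through the regex engine and the char class, so "letters from any language" can usually be matched with \p{L} (or tested with char.IsLetter) instead of maintaining code-point ranges by hand. A small sketch:

        using System.Text.RegularExpressions;

        public static class LetterFilter
        {
            // Keeps Unicode letters and whitespace; drops digits, punctuation,
            // control characters, math symbols (e.g. the division sign), etc.
            public static string LettersOnly(string text)
            {
                return Regex.Replace(text, @"[^\p{L}\s]", string.Empty);
            }
        }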

    Read the article

  • C# Proxy, what is the best way to do this?

    - by Kin
    I'm writing a proxy using .NET and C#. It has a couple of functions that it needs to fulfill. I haven't done much socket programming, and I am not sure of the best way to go about it. Should I use synchronous or asynchronous sockets? Please help! It must... Accept connections from the client on two different ports, and be able to receive data on both ports at the same time. When a connection is made on a port, it must immediately connect to the server, and start sending data as it receives it from the client to the server. Packets must be forwarded in the order they are received, exactly as they were received. It needs to be as low latency as possible. I don't need the ability for multiple clients to use the proxy, but it would be a nice feature if it's easy to implement.
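
    Whichever socket model is used, the core of a forwarding proxy is two byte pumps per connection (client to server and server to client); ordering is preserved because each TCP stream is already ordered. A stripped-down sketch using blocking sockets and one thread per direction; the second listening port would just repeat the same pattern, and the host/port values are placeholders.

        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        public static class SimpleProxy
        {
            // Listens on localPort and forwards each accepted connection to remoteHost:remotePort.
            public static void Run(int localPort, string remoteHost, int remotePort)
            {
                var listener = new TcpListener(IPAddress.Any, localPort);
                listener.Start();

                while (true)
                {
                    TcpClient client = listener.AcceptTcpClient();
                    client.NoDelay = true;                                   // keep latency down

                    var server = new TcpClient(remoteHost, remotePort) { NoDelay = true };

                    // One pump per direction; each copies bytes in arrival order.
                    new Thread(() => Pump(client.GetStream(), server.GetStream())) { IsBackground = true }.Start();
                    new Thread(() => Pump(server.GetStream(), client.GetStream())) { IsBackground = true }.Start();
                }
            }

            private static void Pump(NetworkStream from, NetworkStream to)
            {
                var buffer = new byte[8192];
                int read;
                while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
                {
                    to.Write(buffer, 0, read);   // forward immediately, in order
                }
                // A real proxy would also close the paired connection here.
            }
        }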

    Read the article

  • Doing a join across two databases with different collations on SQL Server and getting an error.

    - by Andrew G. Johnson
    I know, I know, with what I wrote in the question I shouldn't be surprised. But my situation is that I'm slowly working on an inherited POS system, and my predecessor apparently wasn't aware of JOINs, so when I looked into one of the internal pages that takes 60 seconds to load, I saw that it's a fairly quick job to rewrite its 8 queries as one query with JOINs. The problem is that besides not knowing about JOINs, he also seems to have had a fetish for multiple databases and, surprise, surprise, they use different collations. Fact of the matter is we use all "normal" Latin characters that English-speaking people would consider the entire alphabet, and this whole thing will be out of use in a few months, so a bandaid is all I need. Long story short, I need some kind of method to cast to a single collation so I can compare two fields from two databases. The exact error is: Cannot resolve the collation conflict between "SQL_Latin1_General_CP850_CI_AI" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.
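
    The usual bandaid for exactly this error is a COLLATE clause on one or both sides of the join so the comparison happens in a single collation. A sketch of what that might look like from the C# side; the database, table and column names are invented for illustration.

        using System.Data.SqlClient;

        public static class CrossDbQuery
        {
            // Forces both join keys to the current database's default collation,
            // which removes the "Cannot resolve the collation conflict" error.
            public const string Sql = @"
                SELECT o.OrderId, c.CustomerName
                FROM   SalesDb.dbo.Orders AS o
                JOIN   LegacyDb.dbo.Customers AS c
                       ON  o.CustomerCode COLLATE DATABASE_DEFAULT
                         = c.CustomerCode COLLATE DATABASE_DEFAULT;";

            public static void Run(SqlConnection openConnection)
            {
                using (var cmd = new SqlCommand(Sql, openConnection))
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // consume OrderId / CustomerName here
                    }
                }
            }
        }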

    Read the article

  • Maximum Possible File Name Length in Windows Kernel

    - by Lambert
    I was wondering, what is the longest possible name length allowed by the Windows kernel? E.g.: I know the kernel uses UNICODE_STRING structures to hold all object paths, and since the byte length of a wide-character string is stored inside a USHORT, that allows for a maximum path length of 2^15 - 1 characters. Is there a similar, hard restriction on a file name (rather than path)? (I don't care if NTFS or FAT32 imposes a particular restriction; I'm looking for the longest possible theoretically allowed name in the kernel, assuming no additional file system or shell restrictions.) (Edit: For those wondering why this even matters, consider that normally, traversing a directory is achieved by FindFirstFile/FindNextFile calls, one call per file. Given the function named NtQueryDirectoryFile, which is the underlying system call and which returns multiple file names per call, it's actually possible to take advantage of this maximum-length restriction on the path to make an extremely-fast directory traverser that uses solely the stack as a buffer. Now I'm trying to extend that concept, and I need to know the maximum size of a file name.)

    Read the article

  • Cocoa accessibility API, can I click a window in the background without activating it?

    - by Winawer
    I've been searching forever for a solution to this, so I thought I'd seek out the brainpower of greater minds than mine. I'm developing a Cocoa app that uses the Accessibility API to manipulate another program (it's a hotkey app). The app I'm controlling typically has multiple windows open, with some hidden behind others. What I would like to do, if it's possible, is to send mouse events to windows using the Accessibility API in a way that presses a button in the window without bringing it to the foreground (interact with the window but don't activate it). The reason I'm trying to do this is that sending the mouse event to this other window will force it to the foreground and disrupt the user's interaction with the foremost window. This is possible on Windows - apparently, because apps similar to mine do it there - but I'm getting the feeling that this isn't possible with Cocoa, given the way the window manager works. Am I mistaken?

    Read the article
