Search Results

Search found 25148 results on 1006 pages for 'distributed source contr'.

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million or 5 million users, that becomes really difficult, because I would want to start distributing the database, in which case an auto-increment primary key would be useless: each node would be generating the same primary keys. Is the solution to use natural primary keys? I am having a really hard time thinking of a natural primary key for this bunch of users. The problem is they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
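
    A common approach to this problem - sketched below in Python with illustrative names, not taken from the question itself - is to give up on central counters entirely and have each node generate globally unique IDs, e.g. UUIDs:

        import uuid

        def new_user_id() -> str:
            # uuid4 draws from random bits, so independent nodes can mint
            # IDs concurrently with a negligible chance of collision.
            return uuid.uuid4().hex

        # Two "nodes" creating users without any coordination:
        user_a = new_user_id()
        user_b = new_user_id()
        assert user_a != user_b

    The trade-off is that UUIDs are larger than integers and index less compactly, which is worth weighing before dropping auto-increment keys.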

  • Are licenses relevant for small code snippets?

    - by Martin
    When I'm about to write a short algorithm, I first check whether the base class library I'm using already implements it. If not, I often do a quick Google search to see if someone has done it before (which is the case 19 times out of 20). Most of the time, I find the exact code I need. Sometimes it's clear what license applies to the source code, sometimes not. It may be GPL, LGPL, BSD or whatever. Sometimes people have posted a code snippet on some random forum which solves my problem. It's clear to me that I can't reuse the code (copy/paste it into my own code) without caring about the license if the code is in some way substantial. What is not clear to me is whether I can copy a code snippet of 5 lines or so without committing a license violation. Can I copy/paste a 5-line snippet without caring about the license? What about a one-liner? What about 10 lines? Where do I draw the line (no pun intended)?

    My second problem is that if I have found a 10-line code snippet which does exactly what I need, but feel that I cannot copy it because it's GPL-licensed and my software isn't, I have already memorized how to implement it, so when I go to implement the same functionality my code is almost identical to the GPL-licensed code I saw a few minutes ago. (In other words, the code was copied to my brain, and my brain then copied it into my source code.)

  • How do I load an XML document, add and remove nodes, then apply it to a ASP DataGrid control?

    - by JFOX
    I have a pretty simple operation but am struggling with how to implement it. I load XML from an external data source using DataSet.ReadXml(), then create a new XmlDataDocument from that DataSet and sync the DataSet back to the XmlDataDocument, like so:

        doc = new XmlDataDocument(dsDataSet);
        dsDataSet.EnforceConstraints = false;
        dsDataSet = doc.DataSet;

    Once loaded, I do two things to the XmlDataDocument: loop through and check whether a purely meta node, count, exists right beneath the root node, and if so remove it; and check whether a thumb node exists in a second-level nodelist, and if not, create and append it. This all goes as expected, because the result of doc.Save() looks correct. Where I'm having an issue is updating the DataSet, which is the data source for an ASP DataGrid. Once all the above XML manipulation is done, I do this:

        dsDataSet.Merge(doc.DataSet);
        dsDataSet.AcceptChanges();

    I then apply the DataSet to the grid control:

        dgList.DataSource = dsDataSet;
        dgList.DataBind();

    But when I do this, I get this error on the site:

        System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'thumb'.

    What did I miss?

  • Flash CS4 compiler Error 1120 when embedding pngs into class instance variables.

    - by theolagendijk
    I have a Flash CS4 (Flash 9, ActionScript 3.0) project that compiles and runs perfectly on my machine. However, it is part of a big batch of .flas that I want to compile on another (faster) machine. When I copy the project (the .fla and all ActionScript and asset files) to the faster machine, its Flash CS4 compiler gives me compiler error 1120, "Access of undefined property ButtonPause_PauseNormal". The property PauseNormal is an embedded PNG. The PNG is available, and there are no transcoder errors. Here's the ActionScript for class ButtonPause:

        package nl.platipus.NissanESM.buttons {
            import flash.display.*;
            import flash.events.*;

            public class ButtonPause extends Sprite {
                [Embed(source="../../../../player/pause.png")]
                private var PauseNormal:Class;
                [Embed(source="../../../../player/pause_mo.png")]
                private var PauseMouseOver:Class;

                private var stateNormal:Bitmap;
                private var stateMouseOver:Bitmap;

                public function ButtonPause() {
                    stateNormal = new PauseNormal();
                    stateNormal.width = 29;
                    stateNormal.height = 14;
                    stateNormal.alpha = 1;
                    addChild(stateNormal);

                    stateMouseOver = new PauseMouseOver();
                    stateMouseOver.width = 29;
                    stateMouseOver.height = 14;
                    stateMouseOver.alpha = 0;
                    addChild(stateMouseOver);

                    width = 29;
                    height = 14;

                    addEventListener(MouseEvent.MOUSE_OVER, handleMouseOver);
                    addEventListener(MouseEvent.MOUSE_OUT, handleMouseOut);
                }

                private function handleMouseOver(evt:MouseEvent):void {
                    stateNormal.alpha = 0;
                    stateMouseOver.alpha = 1;
                }

                private function handleMouseOut(evt:MouseEvent):void {
                    stateNormal.alpha = 1;
                    stateMouseOver.alpha = 0;
                }
            }
        }

    (Both machines run the exact same Flash CS4 Professional version 10.0.2 installation, and both have the exact same publish settings and ActionScript 3.0 settings.) What's going on?

  • When do you tag your software project?

    - by WilhelmTell of Purple-Magenta
    I realize there are various kinds of software projects:

    - commercial (for John Doe)
    - industrial (for Mr. Montgomery Burns)
    - successful open-source (with an audience larger than, say, 10 people)
    - personal projects (with an audience size in the vicinity of 1)

    each of which releases a new version of its product under different conditions. I'm particularly interested in the case of personal projects and open-source projects. When, or under what conditions, do you make a new release of any kind? Do you subscribe to a fixed recurring deadline, such as every two weeks? Do you commit to a release after at least 10 minor fixes, or one major fix? Do you combine the two, so that at least one condition must hold, or both? I reckon this is a subjective question. I ask it in search of tricks to keep my projects alive and kicking. Sometimes my projects are active but look as if they aren't, because I don't have the confidence to make a release or a tag of any sort for a long time -- on the order of months.

  • Custom basic authentication fails in IIS7

    - by manu08
    I have an ASP.NET MVC application with some RESTful services that I'm trying to secure using custom basic authentication (users are authenticated against my own database). I have implemented this by writing an HttpModule. I have one method attached to the HttpApplication.AuthenticateRequest event, which calls this method when authentication fails:

        private static void RejectWith401(HttpApplication app)
        {
            app.Response.StatusCode = 401;
            app.Response.StatusDescription = "Access Denied";
            app.CompleteRequest();
        }

    This method is attached to the HttpApplication.EndRequest event:

        public void OnEndRequest(object source, EventArgs eventArgs)
        {
            var app = (HttpApplication) source;
            if (app.Response.StatusCode == 401)
            {
                string val = String.Format("Basic Realm=\"{0}\"", "MyCustomBasicAuthentication");
                app.Response.AppendHeader("WWW-Authenticate", val);
            }
        }

    This code adds the WWW-Authenticate header, which tells the browser to bring up the login dialog. It works perfectly when I debug locally using Visual Studio's web server, but it fails when I run it in IIS7. In IIS7 I have all the built-in authentication modules turned off except anonymous. It still returns an HTTP 401 response, but it appears to be removing the WWW-Authenticate header. Any ideas?

  • Steps to Investigate Cause of Web.Config Duplicate Section

    - by pauly
    Symptoms, in an IIS .NET 2.0 integrated app pool:

    - Double-clicking to view any web.config section results in the following error dialog: "There was an error while performing this operation... Filename... web.config... Error: There is a duplicate..."
    - Browsing to the URL displays: "HTTP 500.19 internal server error... There is a duplicate 'system.web.extensions/scripting/scriptResourceHandler' section defined..."
    - Running the app from VS 2008, an "Unable to start debugging on the web server..." dialog is displayed.

    Things tried:

    - Looked at other application directories on the same IIS server: no problem viewing web.config contents or serving up the app.
    - Removed and re-added the application in IIS.
    - Checked out a new version of the source code.
    - Reverted to prior versions of the web.config file.
    - Looked for web.config files that might have duplicate sections in: the Inetpub root; "C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config"; the "Views" subfolder of the ASP.NET MVC app.
    - Checked out the source code to another dev machine and set up an IIS 7 app folder there: no problem with web.config.

    Question: if the reason for this error is another web.config file, where else should I look? Are there other reasons for these symptoms?

  • Retrieving dll version info via Win32 - VerQueryValue(...) crashes under Win7 x64

    - by user256890
    The respected open-source .NET wrapper implementation (SharpBITS) of the Windows BITS service fails to identify the underlying BITS version under Win7 x64. Here is the source code that fails. NativeMethods are native Win32 calls wrapped by .NET methods and decorated with the DllImport attribute.

        private static BitsVersion GetBitsVersion()
        {
            try
            {
                string fileName = Path.Combine(System.Environment.SystemDirectory, "qmgr.dll");
                int handle = 0;
                int size = NativeMethods.GetFileVersionInfoSize(fileName, out handle);
                if (size == 0) return BitsVersion.Bits0_0;
                byte[] buffer = new byte[size];
                if (!NativeMethods.GetFileVersionInfo(fileName, handle, size, buffer))
                {
                    return BitsVersion.Bits0_0;
                }
                IntPtr subBlock = IntPtr.Zero;
                uint len = 0;
                if (!NativeMethods.VerQueryValue(buffer, @"\VarFileInfo\Translation", out subBlock, out len))
                {
                    return BitsVersion.Bits0_0;
                }
                int block1 = Marshal.ReadInt16(subBlock);
                int block2 = Marshal.ReadInt16((IntPtr)((int)subBlock + 2));
                string spv = string.Format(@"\StringFileInfo\{0:X4}{1:X4}\ProductVersion", block1, block2);
                string versionInfo;
                if (!NativeMethods.VerQueryValue(buffer, spv, out versionInfo, out len))
                {
                    return BitsVersion.Bits0_0;
                }
                ...

    The implementation follows the MSDN instructions to the letter. Still, during the second VerQueryValue(...) call, the application crashes and kills the debug session without hesitation. A little more debug info right before the crash:

        spv    = "\StringFileInfo\040904B0\ProductVersion"
        buffer = byte[1900] - full of binary data
        block1 = 1033
        block2 = 1200

    I looked at the targeted "C:\Windows\System32\qmgr.dll" (the implementation of BITS) via Windows. It says that the product version is 7.5.7600.16385. Instead of crashing, this value should be returned in the versionInfo string. Any advice?

  • First run notepad with my.cfg and only then start the service

    - by Viv Coco
    Hi all, I install along with my application:

    1) a service that starts and stops my application as needed
    2) a conf file that contains the user data, which will be shown to the user to modify as needed (I give the user the chance to change it by running notepad.exe with my conf file during the install)

    The problem is that in my code the service I install starts before the user has had a chance to modify the conf file. What I would like is:

    1) first the user gets the chance to change the conf file (run notepad.exe with the conf file)
    2) only afterwards start the service

        <Component Id="MyService.exe" Guid="GUID">
          <File Id="MyService.exe" Source="MyService.exe" Name="MyService.exe" KeyPath="yes" Checksum="yes" />
          <ServiceInstall Id='ServiceInstall' DisplayName='MyService' Name='MyService' ErrorControl='normal' Start='auto' Type='ownProcess' Vital='yes'/>
          <ServiceControl Id='ServiceControl' Name='MyService' Start='install' Stop='both' Remove='uninstall'/>
        </Component>
        <Component Id="my.conf" Guid="" NeverOverwrite="yes">
          <File Id="my.cfg" Source="my.cfg_template" Name="my.cfg" KeyPath="yes" />
        </Component>
        [...]
        <Property Id="NOTEPAD">Notepad.exe</Property>
        <CustomAction Id="LaunchConfFile" Property="NOTEPAD" ExeCommand="[INSTALLDIR]my.cfg" Return="ignore" Impersonate="no" Execute="deferred"/>
        <!-- Run only on installs -->
        <InstallExecuteSequence>
          <Custom Action='LaunchConfFile' Before='InstallFinalize'>(NOT Installed) AND (NOT UPGRADINGPRODUCTCODE)</Custom>
        </InstallExecuteSequence>

    What am I doing wrong in the above code, and how could I change it to achieve what I need (first run notepad with my conf file, then start the service)? TIA, Viv

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base of about 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid-nineties in there!). A dispute has arisen among the team over the syntax we use for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this:

    Example of Inner Join

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of Outer Join

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer SQL-92 queries, like this:

    Example of Inner Join

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of Outer Join

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax: we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we did. They also point out that "this is the way we've always done it, and we're comfortable with it...". Group B agree that we don't have the time to go back and change existing queries, but say we really ought to adopt the "new" syntax in code we write from here on in. They say that developers only really look at a single query at a time, and that as long as developers know both syntaxes there is nothing to be gained from rigidly sticking to the old one, which might be deprecated at some point in the future. Without declaring where my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin. Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

  • If I select from an IQueryable then the Include is lost

    - by Connor Murphy
    The Include does not work after I perform a select on the IQueryable query. Is there a way around this? My query is:

        public IQueryable<Network> GetAllNetworks()
        {
            var query = (from n in _db.NetworkSet
                             .Include("NetworkContacts.Contact")
                             .Include("NetworkContacts.Contact.RelationshipSource.Target")
                             .Include("NetworkContacts.Contact.RelationshipSource.Source")
                         select (n));
            return query;
        }

    I then try to populate my ViewModel in my WebUI layer using the following code:

        var projectedNetworks = from n in GetAllNetworks()
            select new NetworkViewModel
            {
                Name = n.Name,
                Contacts = from contact in networkList
                               .SelectMany(nc => nc.NetworkContacts)
                               .Where(nc => nc.Member == true)
                               .Where(nc => nc.NetworkId == n.ID)
                               .Select(c => c.Contact)
                           select contact,
            };
        return projectedNetworks;

    The problem now occurs in my newly created NetworkViewModel: the Contacts object does not include any loaded data for RelationshipSource.Target or RelationshipSource.Source. The data is there when run from the original repository IQueryable object. However, the related Include data does not seem to get transferred into the new Contacts collection that is created from this IQueryable by the select new {} code above. Is there a way to preserve this Include data when it gets passed into a new object?

  • Castle Active Record - Working with the cache

    - by David
    Hi all, I'm new to the Castle ActiveRecord pattern and I'm trying to get my head around how to use the cache effectively. What I'm trying to do (or want to do) is this: when calling GetAll, find out if I have called it before and check the cache, else load from the database; but I also want to pass a bool parameter that will force the cache to clear and requery the db. I'm just looking for the final bits. Thanks.

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            List<Model.Resource> resources = new List<Model.Resource>();

            // Request to force reload
            if (forceReload)
            {
                // need to specify to force a reload (how?)
                XmlConfigurationSource source = new XmlConfigurationSource("appconfig.xml");
                ActiveRecordStarter.Initialize(source, typeof(Model.Resource));
                resources = Model.Resource.FindAll().ToList();
            }
            else
            {
                // Check the cache somehow and return the cache?
            }
            return resources;
        }

        public static List<Model.Resource> GetAll()
        {
            return GetAll(false);
        }
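
    The pattern being asked about - a cached getter with a force-reload flag - is language-neutral; here is a minimal sketch in Python, purely for illustration (none of the names below are Castle ActiveRecord API):

        _cache = None

        def get_all(force_reload=False):
            """Return all resources, hitting the data store only when needed."""
            global _cache
            if force_reload or _cache is None:
                # Cache miss, or the caller demanded fresh data: requery.
                _cache = load_from_database()
            return _cache

        def load_from_database():
            # Hypothetical loader standing in for Model.Resource.FindAll().
            return ["resource1", "resource2"]

    The same shape maps onto the C# above: the forceReload branch repopulates the cache, and the else branch returns the cached list.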

  • How do you handle files that can't support concurrent edits in Mercurial?

    - by Scott Whitlock
    I'm using Mercurial with TortoiseHg. Each developer has their own repositories, and there's one central repository on the server for synchronizing our changes. (This will sound lame, but we're using it to manage the source for a legacy VB6 project. Nothing we can do about that...) As has been pointed out elsewhere, there is a big problem in VB6 with merging the .frx (form resources) files. So code changes seem to merge fine, but if two developers both make changes at the same time in the form design view, we can't merge. I'm ok with disallowing concurrent edits, but of course the whole point of Mercurial is that it's distributed so there is no option to force a file to be locked before editing. I don't believe there's a Mercurial solution for this, so I'm wondering: other developers who are using Mercurial for version control, do you have some 3rd party tool that assists with locking files for editing in the cases where it's necessary? Did we make a mistake using Mercurial instead of something like SVN?

  • First app - wrong language is shown in the App Store

    - by Sean
    Hi all, last week I distributed my first app to the App Store. What I then saw was that the app language shown in the App Store is not the right one. My app is German-only, but the App Store shows English. Can somebody tell me exactly what I have to do so that the language shown in the App Store is German? I know I need a "de.lproj" folder, but I don't know what this folder should contain, or what I have to do step by step to set this up the right way. Thanks in advance, Sean

  • Better way to call superclass method in ExtJS

    - by Rene Saarsoo
    All the ExtJS documentation and examples I have read suggest calling superclass methods like this:

        MyApp.MyPanel = Ext.extend(Ext.Panel, {
            initComponent: function() {
                // do something MyPanel specific here...
                MyApp.MyPanel.superclass.initComponent.call(this);
            }
        });

    I have been using this pattern for quite some time, and the main problem is that when you rename your class, you also have to change all the calls to superclass methods. That's quite inconvenient; often I forget, and then I have to track down strange errors. But reading the source of Ext.extend() I discovered that instead I could use the superclass() or supr() methods that Ext.extend() adds to the prototype:

        MyApp.MyPanel = Ext.extend(Ext.Panel, {
            initComponent: function() {
                // do something MyPanel specific here...
                this.superclass().initComponent.call(this);
            }
        });

    In this code, renaming MyPanel to something else is simple - I just have to change the one line. But I have doubts... I haven't seen this documented anywhere, and the old wisdom says I shouldn't rely on undocumented behaviour. I didn't find a single use of these superclass() and supr() methods in the ExtJS source code. Why create them if you aren't going to use them? Maybe these methods were used in some older version of ExtJS but are deprecated now? But it seems such a useful feature - why would you deprecate it? So, should I use these methods or not?

  • How to terminate a particular Azure worker role instance

    - by Oliver Bock
    Background: I am trying to work out the best structure for an Azure application. Each of my worker roles will spin up multiple long-running jobs. Over time I can transfer jobs from one instance to another by switching them to a read-only mode on the source instance, spinning them up on the target instance, and then spinning the originals down on the source instance. If I have too many jobs, I can tell Azure to spin up extra role instances and use them for new jobs. Conversely, if my load drops (e.g. during the night), I can consolidate outstanding jobs onto a few machines and tell Azure to give me fewer instances. The trouble is that (as I understand it) Azure provides no mechanism to let me decide which instance to stop. Thus I cannot know which servers to consolidate onto, and some of my jobs will die when their instance stops, causing delays for users while I restart those jobs on surviving instances.

    Idea 1: I decide which instance to stop, and return from its Run(). I then tell Azure to reduce my instance count by one, and hope it concludes that the broken instance is a good candidate. Has anyone tried anything like this?

    Idea 2: I predefine a whole bunch of different worker roles with identical contents. I can individually stop and start them by switching their instance counts between zero and one. I think this idea would work, but I don't like it, because it seems to go against the natural Azure way of doing things, and because it involves a lot of extra bookkeeping to manage the extra worker roles.

    Idea 3: Live with it.

    Any better ideas?

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:

    - Message-passing processes, possibly distributed across a cluster, e.g. MPI.
    - Explicit shared-memory parallelism, either using pthreads or fork(), pipe(), et al.
    - Implicit shared-memory parallelism, using OpenMP.

    Deciding on an approach is an exercise in trade-offs. In Python, what approaches are available, and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard references to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
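
    For the CPU-bound case described in the question, the standard-library multiprocessing module is the usual first answer, since separate processes sidestep the GIL. A minimal sketch, with the raster work reduced to a placeholder function:

        from multiprocessing import Pool

        def process_tile(tile):
            # Placeholder for a CPU-bound per-tile raster calculation.
            return sum(x * x for x in tile)

        if __name__ == "__main__":
            tiles = [range(1000000) for _ in range(8)]
            pool = Pool()  # one worker process per CPU core by default
            results = pool.map(process_tile, tiles)
            pool.close()
            pool.join()
            print(results)

    Because each tile is pickled and sent to a worker process, this pays off when the per-tile computation dominates the cost of shipping the data.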

  • XSLT fails to load huge XML doc after matching certain elements

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (note especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It then fails on loadXML and immediately dies. I think there are two options now. 1) I should raise the maximum execution time - frankly, I don't know how to do that for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml. Any help would be appreciated! Thanks.

  • Django internationalization for admin pages - translate model name and attributes

    - by geekQ
    Django's internationalization is very nice (gettext-based, LocaleMiddleware), but what is the proper way to translate the model name and the attributes for admin pages? I did not find anything about this in the documentation:

    http://docs.djangoproject.com/en/dev/topics/i18n/internationalization/
    http://www.djangobook.com/en/2.0/chapter19/

    I would like to have "Добавить заказ для изменения" instead of "Добавить order для изменения". Note, the 'order' is not translated. First, I defined a model, activated USE_I18N = True in settings.py, and ran django-admin makemessages -l ru. No entries are created by default for model names and attributes. Grepping in the Django source code I found:

        $ ack "Select %s to change" contrib/admin/views/main.py
        70: self.title = (self.is_popup and ugettext('Select %s') % force_unicode(self.opts.verbose_name) or ugettext('Select %s to change') % force_unicode(self.opts.verbose_name))

    So the verbose_name meta property seems to play some role here. I tried to use it:

        class Order(models.Model):
            subject = models.CharField(max_length=150)
            description = models.TextField()

            class Meta:
                verbose_name = _('order')

    Now the updated po file contains msgid 'order', which can be translated. So I put the translation in. Unfortunately, the admin pages still show the same mix of "Добавить order для изменения". I'm currently using Django 1.1.1. Could somebody point me to the relevant documentation? Because Google cannot. ;-) In the meantime I'll dig deeper into the Django source code...
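
    One detail that often matters in this situation (offered as an aside, not something confirmed in the post): strings attached to models are normally marked with the lazy variant of gettext, because models are imported before any request locale is active. A sketch of that convention:

        from django.db import models
        from django.utils.translation import ugettext_lazy as _

        class Order(models.Model):
            # Lazy strings are resolved per request, once the locale is known.
            subject = models.CharField(_('subject'), max_length=150)
            description = models.TextField(_('description'))

            class Meta:
                verbose_name = _('order')
                verbose_name_plural = _('orders')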

  • Make MySQL database replication always use the most free node?

    - by Chad Johnson
    We started using Multi-Master Replication Manager for MySQL, and I am wondering whether it is possible to treat this setup like symmetric multiprocessing: a job pops off the queue, and the node (in this case a server) that is most free is selected for it. It seems that what happens is the service switches to a slave ONLY when mysqld crashes or goes away. Is there a way to make database replication for MySQL act in a more distributed manner? Maybe there is other software besides MMM that can do this? Is there a way to switch the reader role to another server when mysqld slows down (rather than just when it fails)?

  • Problem with DSL and Business Rules creation in Drools

    - by jillika iyer
    Hi, I am using Eclipse with the Drools plugin to create rules. I want to create business rules, and the main aim is to provide the user a set of options he can use in his rules. For example, if an Apple can have only 3 colors, I want to provide an option like a drop-down, so that the user knows beforehand which options he can use in his rules. Is that possible? I am creating a DSL but am so far unable to provide the above functionality for a business rule, and I am also getting an error implementing a basic DSL. The code to add the DSL in my RuleRunner class is as follows:

        InputStream ruleSource = RuleRunner.class.getClassLoader().getResourceAsStream("/Rule1.dslr");
        InputStream dslSource = RuleRunner.class.getClassLoader().getResourceAsStream("/sample-dsl.dsl");
        // Load the rules, using the DSL
        addRulesToThisPackage.addPackageFromDrl(
            new InputStreamReader(ruleSource), new InputStreamReader(dslSource));

    I have both sample-dsl.dsl and Rule1.dslr in my working directory. The error is encountered at adding the DSL to the package (the last line). Error stack:

        Exception in thread "main" java.lang.NullPointerException
            at java.io.Reader.<init>(Unknown Source)
            at java.io.InputStreamReader.<init>(Unknown Source)
            at com.org.RuleRunner.loadRuleFile(RuleRunner.java:96)
            at com.org.RuleRunner.loadRules(RuleRunner.java:48)
            at com.org.RuleRunner.runStatelessRules(RuleRunner.java:109)
            at com.org.RulesTest.main(RulesTest.java:41)

    My DSL file has basic mappings as per the online documentation. The DSL rule I created is:

        expander sample-dsl.dsl

        rule "A status changes B status"
        when
            There is an A
            - has an address
            There is a B
            - has name
        then
            - print updated A and A address
        end

    I have created the DSL in Eclipse. Is the code I added to load it into my package correct, or am I missing something? It seems like my program is unable to find the DSL. Please help. Can you point me towards the right direction to create a user-friendly business rule? Thanks. J

  • DesignTime data not showing in Blend when bound against CollectionViewSource

    - by bitbonk
    I have a DataTemplate for a ViewModel where an ItemsControl is bound against a CollectionViewSource (to enable sorting in XAML):

        <DataTemplate x:Key="equipmentDataTemplate">
          <Viewbox>
            <Viewbox.Resources>
              <CollectionViewSource x:Key="viewSource" Source="{Binding Modules}">
                <CollectionViewSource.SortDescriptions>
                  <scm:SortDescription PropertyName="ID" Direction="Ascending"/>
                </CollectionViewSource.SortDescriptions>
              </CollectionViewSource>
            </Viewbox.Resources>
            <ItemsControl ItemsSource="{Binding Source={StaticResource viewSource}}"
                          Height="{DynamicResource equipmentHeight}"
                          ItemTemplate="{StaticResource moduleDataTemplate}">
              <ItemsControl.ItemsPanel>
                <ItemsPanelTemplate>
                  <StackPanel Orientation="Horizontal" />
                </ItemsPanelTemplate>
              </ItemsControl.ItemsPanel>
            </ItemsControl>
          </Viewbox>
        </DataTemplate>

    I have also set up the UserControl where all of this is defined to provide design-time data:

        d:DataContext="{x:Static vm:DesignTimeHelper.Equipment}"

    This is basically a static property that gives me an EquipmentViewModel that has a list of ModuleViewModels (Equipment.Modules). Now, as long as I bind to the CollectionViewSource, the design-time data does not show up in Blend 3. When I bind to the ViewModel collection directly,

        <ItemsControl ItemsSource="{Binding Modules}" ...

    I can see the design-time data. Any idea what I could do?

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different web hosts, and I'm looking for a way to periodically update one server using the other. Here's what I'm looking for:

    - As automated as possible - ideally, without any involvement on my part once it's set up.
    - Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
    - Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
    - One database is about 4GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then has the new server pick them up and restore them. The only downside I can see is that there's no way to know that the backups are done before starting to transfer them - can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!

  • Issue in creating Zip file using glob.glob

    - by infosyssec
    Hi, I am creating a zip file from a folder (and subfolders). It works fine and creates a new .zip file, but I am having an issue with glob.glob. It reads all files from the desired folder (source folder) and writes them to the new zip file, but the problem is that it adds subdirectories without adding the files inside them. I give the user an option to select the filename and path, as well as the file type (zip or tar). I don't get any problem creating a .tar.gz file; the problem only comes up when the user creates a .zip file. Here is my code:

        for name in (Source_Dir):
            for name in glob.glob("/path/to/source/dir/*"):
                myZip.write(name, os.path.basename(name), zipfile.ZIP_DEFLATED)
        myZip.close()

    Also, if I use the code below:

        for dirpath, dirnames, filenames in os.walk(Source_Dir):
            for filename in filenames:
                myZip.write(os.path.join(dirpath, filename), os.path.basename(filename))
        myZip.close()

    then this second version takes all files, even those inside folders/subfolders, creates a new .zip file, and writes them to it without any directory structure. It does not even keep the directory structure of the main folder; it simply writes all files from the main dir and subdirs flat into the .zip file. Can anyone please help me or make a suggestion? I would prefer glob.glob rather than the 2nd option. Thanks in advance. Regards, Akash
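
    For reference, here is a minimal sketch of the os.walk() variant that keeps the directory structure; the trick is to pass an archive name relative to the source directory instead of the bare basename (the paths and names below are illustrative):

        import os
        import zipfile

        def zip_tree(source_dir, zip_path):
            zf = zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED)
            for dirpath, dirnames, filenames in os.walk(source_dir):
                for filename in filenames:
                    full = os.path.join(dirpath, filename)
                    # Store the file under its path relative to source_dir,
                    # preserving the folder layout inside the archive.
                    zf.write(full, os.path.relpath(full, source_dir))
            zf.close()

        zip_tree("/path/to/source/dir", "backup.zip")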

  • Slow SQL command

    - by Retrocoder
    I need to take some data from one table (expanding some XML on the way) and put it in another table. Since the source table can have thousands of records, which caused a timeout, I decided to do it in batches of 100 records. The code runs on a schedule, so doing it in batches works OK for the customer. If I have, say, 200 records in the source database, the sproc runs very fast, but if there are thousands it takes several minutes. I'm guessing that the TOP 100 only takes the top 100 after it has gone through all the records. I need to change the whole code and sproc at some point, as it doesn't scale, but for now: is there a quick fix to make this run quicker?

        INSERT INTO [deviceManager].[TransactionLogStores]
        SELECT TOP 100
            [EventId],
            [message].value('(/interface/mac)[1]', 'nvarchar(100)') AS mac,
            [message].value('(/interface/device)[1]', 'nvarchar(100)') AS device_type,
            [message].value('(/interface/id)[1]', 'nvarchar(100)') AS device_id,
            [message].value('substring(string((/interface/id)[1]), 1, 6)', 'nvarchar(100)') AS store_id,
            [message].value('(/interface/terminal/unit)[1]', 'nvarchar(100)') AS unit,
            [message].value('(/interface/terminal/trans/event)[1]', 'nvarchar(100)') AS event_id,
            [message].value('(/interface/terminal/trans/data)[1]', 'nvarchar(100)') AS event_data,
            [message].value('substring(string((/interface/terminal/trans/data)[1]), 9, 11)', 'nvarchar(100)') AS badge,
            [message].value('(/interface/terminal/trans/time)[1]', 'nvarchar(100)') AS terminal_time,
            MessageRecievedAt_UTC AS db_time
        FROM [deviceManager].[TransactionLog]
        WHERE EventId > @EventId
        --WHERE MessageRecievedAt_UTC > @StartTime AND MessageRecievedAt_UTC < @EndTime
        ORDER BY terminal_time DESC
