Search Results

Search found 29508 results on 1181 pages for 'object initializers'.

Page 1002/1181 | < Previous Page | 998 999 1000 1001 1002 1003 1004 1005 1006 1007 1008 1009  | Next Page >

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally a minute to complete (well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load). When I watch the process, I see that SQL Server is only using a single processor to perform the whole operation, and typically only processor 0. Is there a way I can change my stored proc so that SQL Server can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR (MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active=1
            order by addedBy desc

        --2
        OPEN creditCards
        --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
            set @decryptedCardData = ''

            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards

    Read the article

  • copy a text file in C#

    - by melt
    I am trying to copy a text file into another text file line by line. It seems there is a buffer of 1024 characters: if there are fewer than 1024 characters in my file, my function copies nothing into the other file, and if there are more than 1024 characters but not a multiple of 1024, the excess characters are not copied. For example:

        2048 characters in initial file - 2048 copied
        988 characters in initial file - 0 copied
        1256 characters in initial file - 1024 copied

    Thanks!

        private void button3_Click(object sender, EventArgs e)
        {
            // take the name of the selected file and add a "_poly.txt" suffix
            string ma_ligne;
            const int RMV_CARCT = 9;

            // declare the files
            FileStream apt_file = new FileStream(textBox1.Text, FileMode.Open, FileAccess.Read);
            textBox1.Text = textBox1.Text.Replace(".txt", "_mod.txt");
            FileStream mdi_file = new FileStream(textBox1.Text, FileMode.OpenOrCreate, FileAccess.ReadWrite);

            // read/write the files in question
            StreamReader apt = new StreamReader(apt_file);
            StreamWriter mdi_line = new StreamWriter(mdi_file, System.Text.Encoding.UTF8, 16);

            while (apt.Peek() >= 0)
            {
                ma_ligne = apt.ReadLine();
                //if (ma_ligne.StartsWith("GOTO"))
                //{
                //    ma_ligne = ma_ligne.Remove(0, RMV_CARCT);
                //    ma_ligne = ma_ligne.Replace(" ", "");
                //    ma_ligne = ma_ligne.Replace(",", " ");
                    mdi_line.WriteLine(ma_ligne);
                //}
            }

            apt_file.Close();
            mdi_file.Close();
        }
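    A likely explanation for the missing characters is that only the underlying FileStream objects are closed; the StreamWriter is never flushed or closed, so whatever is still sitting in its buffer is lost. A minimal sketch of the same copy using using blocks, which flush and close everything automatically (the helper name is illustrative, and System.IO is assumed to be imported):

        private void CopyTextFile(string sourcePath)
        {
            string targetPath = sourcePath.Replace(".txt", "_mod.txt");

            using (StreamReader apt = new StreamReader(sourcePath))
            using (StreamWriter mdi_line = new StreamWriter(targetPath, false, System.Text.Encoding.UTF8))
            {
                string ma_ligne;
                while ((ma_ligne = apt.ReadLine()) != null)
                {
                    mdi_line.WriteLine(ma_ligne);
                }
            } // Dispose() flushes the writer here, so nothing buffered is lost.
        }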

    Read the article

  • Java Applet not caching

    - by John Fogerty
    Hello world, I'm having a problem with a Java applet I've deployed that is refusing to be cached in the jvm's "sticky" cache (or by the browser). For some reason every time a user loads the page this applet is on, the jvm re-downloads the jar file from the server which causes a long delay. The webpage containing the applet is being accessed via the internet, so according to Sun's Java applet documentation I'm using an <applet> tag rather than an <object> or <embed> tag. Any help debugging or identifying the problem would be much appreciated. Below is the full applet tag I'm using:

        <applet alt="Scanning Applet failed to load"
                archive="scanning.jar"
                code="scanning.scanlet.class"
                codebase="/java/"
                codetype="application/java"
                height="30"
                mayscript="True"
                name="scanlet"
                width="200">
            <param name="domain" value="192.168.12.23" />
            <param name="publishName" value="scan_attachment" />
            <param name="publishURL" value="http://192.168.12.23/draft/update/52" />
            <param name="curURL" value="http://192.168.12.23/draft/edit/52" />
            Your browser is unable to process the Java &lt;APPLET&gt; tag needed to display this applet
            <br />
            One solution would be to download a better web browser like
            <a href="http://www.mozilla.com/firefox">Mozilla's Firefox</a>
        </applet>

    Read the article

  • Not able to close Dialog from parent dialog reference

    - by coffeeaddict
    In the dialog content, I have this:

        <script type="text/javascript">
            $(function() {
                $("#cancel").click(function(o) {
                    alert(parent.$('#dialog'));
                    parent.$("#dialog").dialog("close");
                    o.preventDefault();
                });
            });
        </script>

    and also tried this:

        <script type="text/javascript">
            $(function() {
                $("#cancel").click(function(o) {
                    $(this).closest("#dialog").dialog("close");
                    o.preventDefault();
                });
            });
        </script>

        <a id="cancel" href="#"><img src="images/cancelBtn.gif" /></a>

    This code again is the content of my dialog. When the user clicks the hyperlink I'm trying to close the parent dialog (the dialog that this content is in), but it's not doing anything. I know that the alert gets hit and that there is an object for parent.$('#dialog'), so I'm not sure why it won't close the parent dialog.

    Read the article

  • JSON Serialization of a Django inherited model

    - by Simon Morris
    Hello, I have the following Django models:

        class ConfigurationItem(models.Model):
            path = models.CharField('Path', max_length=1024)
            name = models.CharField('Name', max_length=1024, blank=True)
            description = models.CharField('Description', max_length=1024, blank=True)
            active = models.BooleanField('Active', default=True)
            is_leaf = models.BooleanField('Is a Leaf item', default=True)

        class Location(ConfigurationItem):
            address = models.CharField(max_length=1024, blank=True)
            phoneNumber = models.CharField(max_length=255, blank=True)
            url = models.URLField(blank=True)
            read_acl = models.ManyToManyField(Group, default=None)
            write_acl = models.ManyToManyField(Group, default=None)
            alert_group = models.EmailField(blank=True)

    The full model file is here if it helps. You can see that Company is a child class of ConfigurationItem. I'm trying to use JSON serialization using either the django.core.serializers.serializer or the WadofStuff serializer. Both serializers give me the same problem:

        >>> from cmdb.models import *
        >>> from django.core import serializers
        >>> serializers.serialize('json', [ ConfigurationItem.objects.get(id=7)])
        '[{"pk": 7, "model": "cmdb.configurationitem", "fields": {"is_leaf": true, "extension_attribute_10": "", "name": "", "date_modified": "2010-05-19 14:42:53", "extension_attribute_11": false, "extension_attribute_5": "", "extension_attribute_2": "", "extension_attribute_3": "", "extension_attribute_1": "", "extension_attribute_6": "", "extension_attribute_7": "", "extension_attribute_4": "", "date_created": "2010-05-19 14:42:53", "active": true, "path": "/Locations/London", "extension_attribute_8": "", "extension_attribute_9": "", "description": ""}}]'
        >>> serializers.serialize('json', [ Location.objects.get(id=7)])
        '[{"pk": 7, "model": "cmdb.location", "fields": {"write_acl": [], "url": "", "phoneNumber": "", "address": "", "read_acl": [], "alert_group": ""}}]'
        >>>

    The problem is that serializing the Company model only gives me the fields directly associated with that model, not the fields from its parent object. Is there a way of altering this behaviour, or should I be looking at building a dictionary of objects and using simplejson to format the output? Thanks in advance ~sm

    Read the article

  • How to loop video using NetStream in Data Generation Mode

    - by WesleyJohnson
    I'm using a NetStream in Data Generation Mode to play an embedded FLV using appendBytes. When the stream is finished playing, I'd like to loop the FLV file. I'm not sure how to achieve this. Here is what I have so far (this isn't a complete example):

        public function createBorderAnimation():void
        {
            // Load the skin image
            borderAnimation = Assets.BorderAnimation;

            // Convert the animation to a byte array
            borderAnimationBytes = new borderAnimation();

            // Initialize the net connection
            border_nc = new NetConnection();
            border_nc.connect( null );

            // Initialize the net stream
            border_ns = new NetStream( border_nc );
            border_ns.client = {
                onMetaData:function( obj:Object ):void{ trace(obj); }
            }
            border_ns.addEventListener( NetStatusEvent.NET_STATUS, border_netStatusHandler );
            border_ns.play( null );
            border_ns.appendBytes( borderAnimationBytes );

            // Initialize the animation
            border_vd = new Video( 1024, 768 );
            border_vd.attachNetStream( border_ns );

            // Add the animation to the stage
            ui = new UIComponent();
            ui.addChild( DisplayObject( border_vd ) );
            grpBackground.addElement( ui );
        }

        protected function border_netStatusHandler( event:NetStatusEvent ):void
        {
            if( event.info.code == "NetStream.Buffer.Flush" ||
                event.info.code == "NetStream.Buffer.Empty" )
            {
                border_ns.appendBytesAction( NetStreamAppendBytesAction.RESET_BEGIN );
                border_ns.appendBytes( borderAnimationBytes );
                border_ns.appendBytesAction( NetStreamAppendBytesAction.END_SEQUENCE );
            }
        }

    This will loop the animation, but it starts chewing up memory like crazy. I've tried using NetStream.seek(0) and NetStream.appendBytesAction( NetStreamAppendBytesAction.RESET_SEEK ), but then I'm not sure what to do next. If you just try to call appendBytes again after that, it doesn't work, presumably because I'm appending the full byte array, which has the FLV header and stuff? I'm not very familiar with how that all works. Any help is greatly appreciated.

    Read the article

  • printf and Console.WriteLine - problems with Console.SetOut?

    - by Matt Jacobsen
    I have a bunch of Console.WriteLines in my code that I can observe at runtime. I communicate with a native library that I also wrote. I'd like to stick some printf's in the native library and observe them too. I don't see them at runtime however. I've created a convoluted hello world app to demonstrate my problem. When the app runs, I can debug into the native library and see that the hello world is called. The output never lands in the TextWriter though. Note that if the same code is run as a console app then everything works fine. C#:

        [DllImport("native.dll")]
        static extern void Test();

        StreamWriter writer;

        public Form1()
        {
            InitializeComponent();
            writer = new StreamWriter(@"c:\output.txt");
            writer.AutoFlush = true;
            System.Console.SetOut(writer);
        }

        private void button1_Click(object sender, EventArgs e)
        {
            Test();
        }

    and the native part:

        __declspec(dllexport) void Test()
        {
            printf("Hello World");
        }

    Despite my earlier ramblings (see edits), I actually think this is a problem in C# (or rather my understanding of it).

    Read the article

  • How to prevent an asp:MultiView from loading all the views

    - by GigaPr
    Hi, I have a MultiView control on my page and a menu to create a tab control:

        <asp:Menu ID="tabMenu" Orientation="Horizontal"
            StaticMenuItemStyle-CssClass="tab"
            StaticSelectedStyle-CssClass="selectedTab"
            CssClass="tabs"
            OnMenuItemClick="Menu1_MenuItemClick"
            runat="server">
        </asp:Menu>
        <asp:MultiView ID="multiViewTab" ActiveViewIndex="0" runat="server">
            <asp:View ID="viewDetails" runat="server">
                <uc:ViewDetails runat="server" ID="ucViewDetails" />
            </asp:View>
            <asp:View ID="viewJobs" runat="server">
                <uc:ViewJob ID="ucViewJob" runat="server" />
            </asp:View>
            <asp:View ID="viewJobList" runat="server">
                <uc:ViewJobList ID="ucViewJobList" runat="server" />
            </asp:View>
        </asp:MultiView>

    and I set the current view as follows:

        protected void Menu1_MenuItemClick(object sender, MenuEventArgs e)
        {
            int index = Int32.Parse(e.Item.Value);
            if (index != 0)
            {
                tabMenu.FindItem("0").Selected = false;
            }
            multiViewTab.ActiveViewIndex = index;
        }

    It works well... the only problem is that each time I click on a view, all the views are loaded, while I would like to load only the one which is active. Do you know any way to avoid loading all the views?
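    All three views sit in the page's control tree, so the user controls inside them run through the page lifecycle whether or not their view is shown. One common workaround, sketched below with assumed .ascx paths, is to leave the views empty in markup and load only the active tab's user control dynamically:

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);

            // By PreRender the menu click has already updated ActiveViewIndex,
            // so only the visible tab's user control is ever created.
            // The .ascx paths are assumptions for illustration.
            switch (multiViewTab.ActiveViewIndex)
            {
                case 0: viewDetails.Controls.Add(LoadControl("~/Controls/ViewDetails.ascx")); break;
                case 1: viewJobs.Controls.Add(LoadControl("~/Controls/ViewJob.ascx")); break;
                case 2: viewJobList.Controls.Add(LoadControl("~/Controls/ViewJobList.ascx")); break;
            }
        }

    Controls added this late do not participate in view state or raise their own postback events, so this only suits views that are essentially read-only displays.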

    Read the article

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following aspects in the software I am building:

        1. Customizable entity model
        2. Customizable UI
        3. Customizable workflow

    I have thought about an approach to achieve this; I want you to review it and make suggestions (a small sketch of the idea follows this question):

        - Entity objects should be plain objects and will hold just data.
        - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
        - Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow.
        - Business objects should not hold any state, and hence will contain only static methods.
        - The workflow will decide, depending upon the "state" of an entity or entities, which methods on the business object(s) to invoke.
        - The workflow should obtain the results of the processing and then pass the business objects on to the appropriate UI screen.
        - The UI screen has to contain instructions about how to display a given entity or entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time coding these frameworks myself. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)? I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar application (not necessarily an ERP system) developed in C#/.NET that offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.
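    To make the "stateless business logic driven by a workflow" point concrete, here is a deliberately tiny sketch; all names are illustrative assumptions, not a recommendation of a particular framework:

        public class Order
        {
            public string State { get; set; }
        }

        // Stateless business logic: granular static methods the workflow can call.
        public static class OrderLogic
        {
            public static void Approve(Order order) { order.State = "Approved"; }
            public static void Ship(Order order)    { order.State = "Shipped"; }
        }

        public class Workflow
        {
            public void Advance(Order order)
            {
                // The workflow, not the entity, decides which method to invoke
                // based on the entity's current state.
                switch (order.State)
                {
                    case "New":      OrderLogic.Approve(order); break;
                    case "Approved": OrderLogic.Ship(order);    break;
                }
            }
        }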

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I also would like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

        1. If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
        2. If a URL can be handled by the virtual path provider, let that handle it.
        3. If there is no other handler, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

        - Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
        - I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
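    For what it's worth, a catch-all route registered after all the specific ones only runs when nothing else has matched, and physical files are served before routing is consulted as long as RouteExistingFiles is left at its default of false, which covers the first rule without a custom IRouteHandler. A rough sketch, with illustrative controller and action names:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // 1. Register the specific routes for the rest of the application here,
            //    before the catch-all, so they keep handling their own URLs.

            // 2. Anything left over is treated as a path into the page tree,
            //    e.g. "Page1/SubPage"; an empty path maps to the root page.
            routes.MapRoute(
                "Pages",
                "{*path}",
                new { controller = "Page", action = "Show", path = "" });
        }

        // The controller then splits the path and walks the Children lists (sketch only):
        public class PageController : Controller
        {
            public ActionResult Show(string path)
            {
                // split "Page1/SubPage" into segments, look up the matching Page,
                // and return its view
                return View();
            }
        }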

    Read the article

  • How to stay DRY when using both Javascript and ERB templates (Rails)

    - by user94154
    I'm building a Rails app that uses Pusher to push updates over web sockets directly to the client. In JavaScript:

        channel.bind('tweet-create', function(tweet){
            //when a tweet is created, execute the following code:
            $('#timeline').append("<div class='tweet'><div class='tweeter'>"+tweet.username+"</div>"+tweet.status+"</div>");
        });

    This is nasty mixing of code and presentation. So the natural solution would be to use a JavaScript template. Perhaps eco or mustache:

        //store this somewhere convenient, perhaps in the view folder:
        tweet_view = "<div class='tweet'><div class='tweeter'>{{tweet.username}}</div>{{tweet.status}}</div>"

        channel.bind('tweet-create', function(tweet){
            //when a tweet is created, execute the following code:
            $('#timeline').append(Mustache.to_html(tweet_view, tweet)); //much cleaner
        });

    This is good and all, except I'm repeating myself. The mustache template is 99% identical to the ERB templates I have already written to render HTML from the server. The intended output/purpose of the mustache and ERB templates is 100% the same: to turn a tweet object into tweet HTML. What is the best way to eliminate this repetition? UPDATE: Even though I answered my own question, I really want to see other ideas/solutions from other people--hence the bounty!

    Read the article

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second script is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job, it hangs forever and I have to remove it. For the sake of space, here is the relevant part of the calling script, ProcessServers.ps1:

        Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance

    GetServerDetailsLight.ps1:

        param($sqlsrv,$destdb,$server,$instance)

        $password = get-content C:\SQLPS\auth.txt | convertto-securestring
        $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password
        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO')

        $box_id = 0;

        if ($sqlsrv.length -eq 0)
        {
            write-output "No data passed"
            break
        }

        function getinfo {
            param(
                [string]$svr,
                [string]$inst
            )
            "Entered GetInfo with: $svr,$inst"
            $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug |
                select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain,
                       NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup
            write-output "WMI Results: $cs"
        }

        getinfo $server $instance
        write-output "Complete"

    Executed as a job, it will show as 'running' forever:

        PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01

        Id              Name            State      HasMoreData     Location             Command
        --              ----            -----      -----------     --------             -------
        21              Job21           Running    True            localhost            param($sqlsrv,$destdb,...

        GAC    Version        Location
        ---    -------        --------
        True   v2.0.50727     C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll
        getinfo MSDCHR01 MSDCHR01
        Entered GetInfo with: SERVER01,SERVER01

    The last output I ever get is 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so:

        PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01

    the WMI query executes just as expected. I am trying to determine why this is, or at least find a useful way to trap errors from within jobs. Thanks!

    Read the article

  • .NET - getting form field key/value pairs?

    - by AverageJoe719
    Hi there, I've got a form with textboxes and I want to run some server-side validation code on the values after the form is submitted. I was planning to grab all the textbox controls on the page, add them to a list, then run a foreach loop that says: for each control in the list, query the database where fieldValidation.Name = Control.Name. This would return to me an object and an associated function where I could then input the Control.Value and actually perform the validation. My friend told me building the list is not necessary because all languages have a way to get key/value pairs from forms (he doesn't know .NET so he couldn't help me). I may be searching for the wrong term here, but I have not been able to find a result, or maybe I misunderstood my friend. Is there some kind of dictionary or key/value pair automatically generated upon form submission that contains the value that was submitted and... I guess also the control? Or am I just misunderstanding him? If he was just saying to populate a key/value pair based on the form submission, how does that help me in this situation over a list that contains the control? Thanks =)
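    What the friend is probably referring to is Request.Form, which exposes the raw submission as a key/value collection; note that the keys are the controls' UniqueIDs rather than the control objects themselves, so they still have to be matched back to validation names. A small sketch, assuming a Web Forms code-behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack) return;

            // Request.Form is a NameValueCollection of every posted field.
            // Keys look like "ctl00$Main$txtEmail" (the control's UniqueID).
            foreach (string key in Request.Form.AllKeys)
            {
                string submittedValue = Request.Form[key];
                // look up the matching validation rule for this field and run it here
            }
        }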

    Read the article

  • Mocking attributes - C#

    - by bob
    I use custom attributes in a project and I would like to integrate them in my unit tests. Now I use Rhino Mocks to create my mocks, but I don't see a way to add my attributes (and their parameters) to them. Did I miss something, or is it not possible? Another mocking framework? Or do I have to create dummy implementations with my attributes? Example: I have an interface in a plugin architecture (IPlugin), and there is an attribute that adds meta info to a property. Then I look for properties with this attribute in the plugin implementation for extra processing (storing its value, marking it as GUI read-only, ...). Now, when I create a mock, can I easily add an attribute to a property or to the object instance itself? EDIT: I found a post with the same question - link. The answer there is not 100% and it is Java... EDIT 2: It can be done... searched some more (on SO) and found 2 related questions (+ answers) here and here. Now, is this already implemented in one or another mocking framework?
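    Because attributes are baked into a type at compile time, the fallback the question already mentions - a hand-written dummy implementation - is the simplest route: the code under test finds the attribute through reflection exactly as it would on a real plugin, and no mocking framework is involved. A sketch with assumed attribute and interface names:

        using System;

        public interface IPlugin { }

        [AttributeUsage(AttributeTargets.Property)]
        public class PluginMetaAttribute : Attribute
        {
            public bool GuiReadOnly { get; set; }
        }

        // Dummy plugin used only by the tests; the attribute is part of the type.
        public class FakePlugin : IPlugin
        {
            [PluginMeta(GuiReadOnly = true)]
            public string Setting { get; set; }
        }

        public static class Demo
        {
            public static void Main()
            {
                var prop = typeof(FakePlugin).GetProperty("Setting");
                var meta = (PluginMetaAttribute)Attribute.GetCustomAttribute(prop, typeof(PluginMetaAttribute));
                Console.WriteLine(meta.GuiReadOnly); // True
            }
        }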

    Read the article

  • How to make this into a self-contained jQuery plugin? Works inline.

    - by Jannis
    Hi, I have been trying to make this into a little jQuery plugin that I can reuse in the future without having to write the following into my actions.js file in full. This works when loaded in the same file where I set the height using my variable tallest:

        var tallest = null;

        $('.slideshow img').each(function(index) {
            if ($(this).height() >= tallest) {
                tallest = $(this).height();
            }
        });

        $('.slideshow').height(tallest);

    This works and will cycle through all the items, then set the value of tallest to the greatest height found. The following, however, does not work. This would be the plugin, loaded from its own file (before the actions.js file that contains the parts using this):

        (function($){
            $.fn.extend({
                tallest: function() {
                    var tallest = null;
                    return this.each(function() {
                        if ($(this).height() >= tallest) {
                            tallest = $(this).height();
                        }
                    });
                }
            });
        })(jQuery);

    Once loaded, I am trying to use it as follows:

        $('.slideshow img').tallest();
        $('.slideshow').height(tallest);

    However, the above 2 lines return an error of 'tallest is undefined'. How can I make this work? Any ideas would be appreciated. Thinking about this even more, the perfect usage of this would be as follows:

        $('.container').height(tallest('.container item'));

    But I wouldn't even know where to begin to get this to work in the manner that you pass the object to be measured into the function by adding it into the brackets of the function name. Thanks for reading, Jannis

    Read the article

  • JSON HTTP Module stream issue

    - by Justin
    Hey, I have an HTTP module that I use to clean up the JSON returned by my web service (see http://www.codeproject.com/KB/webservices/ASPNET_JSONP.aspx?msg=3400287#xx3400287xx for an example of this). Basically it relates to calling cross-domain JSON web services from JavaScript. There is this JsonHttpModule, which uses a JsonResponseFilter stream class to write out the JSON, and the overloaded Write method is supposed to wrap the name of the callback function around the JSON, otherwise the JSON errors out as needing a label. However, if the JSON is really long, the Write method in the stream class is called multiple times, causing the callback function to be incorrectly inserted midway through the JSON. Is there a way in the stream class to wrap the callback function around the stream at the end, or to specify that it write all of the JSON in one Write call instead of in chunks? Here's where the JsonHttpModule attaches the JsonResponseFilter:

        public void OnReleaseRequestState(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            if (!_Apply(app.Context.Request))
                return;

            // apply response filter to conform to JSONP
            app.Context.Response.Filter = new JsonResponseFilter(app.Context.Response.Filter, app.Context);
        }

    Here's the Write method in the JsonResponseFilter stream class that gets called multiple times:

        public override void Write(byte[] buffer, int offset, int count)
        {
            var b1 = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "(");
            _responseStream.Write(b1, 0, b1.Length);

            _responseStream.Write(buffer, offset, count);

            var b2 = Encoding.UTF8.GetBytes(");");
            _responseStream.Write(b2, 0, b2.Length);
        }

    Thanks for any help! Justin
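    One way to cope with the chunked writes, sketched below, is to emit the callback prefix only on the first Write and the closing ");" when the filter is closed, so it no longer matters how many chunks the response arrives in. The class name mirrors the one in the question, but the body is an illustrative rewrite rather than the article's actual code:

        using System;
        using System.IO;
        using System.Text;
        using System.Web;

        public class JsonResponseFilter : Stream
        {
            private readonly Stream _responseStream;
            private readonly HttpContext _context;
            private bool _prefixWritten;

            public JsonResponseFilter(Stream responseStream, HttpContext context)
            {
                _responseStream = responseStream;
                _context = context;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                // Write "callbackName(" exactly once, before the first chunk.
                if (!_prefixWritten)
                {
                    var prefix = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "(");
                    _responseStream.Write(prefix, 0, prefix.Length);
                    _prefixWritten = true;
                }
                _responseStream.Write(buffer, offset, count);
            }

            public override void Close()
            {
                // All chunks have been written by now; close the JSONP wrapper.
                if (_prefixWritten)
                {
                    var suffix = Encoding.UTF8.GetBytes(");");
                    _responseStream.Write(suffix, 0, suffix.Length);
                }
                _responseStream.Close();
                base.Close();
            }

            // Pass-through plumbing required because Stream is abstract.
            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { throw new NotSupportedException(); } }
            public override long Position
            {
                get { throw new NotSupportedException(); }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { _responseStream.Flush(); }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }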

    Read the article

  • How to find the leaky faucet that loads into Malloc 32kb

    - by Rob
    I have been messing around with Leaks trying to find which allocation is not being deallocated (I am still new to this) and could really use some experienced insight. I have this bit of code that seems to be the culprit. Every time I press the button that calls this code, an additional 32 KB is allocated, and when the button is released that memory does not get deallocated. What I found was that every time AVAudioPlayer is called to play an m4a file, the final function to parse the m4a file is MP4BoxParser::Initialize(), and this in turn allocates 32 KB of memory through Cached_DataSource::ReadBytes. My question is, how do I go about deallocating that after it is finished so that it doesn't keep allocating 32 KB every time the button is pressed? Any help you could provide is greatly appreciated!

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            //stop playing
            theAudio.stop;

            // cancel any pending handleSingleTap messages
            [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(handleSingleTap) object:nil];

            UITouch* touch = [[event allTouches] anyObject];
            NSString* filename = [g_AppsList objectAtIndex: [touch view].tag];
            NSString *path = [[NSBundle mainBundle] pathForResource: filename ofType:@"m4a"];

            theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
            theAudio.delegate = self;
            [theAudio prepareToPlay];
            [theAudio setNumberOfLoops:-1];
            [theAudio setVolume: g_Volume];
            [theAudio play];
        }

    Read the article

  • Serializing MDI WinForms for persistence

    - by Serge
    Hello, basically my project is an MDI WinForms application where a user can customize the interface by adding various controls and changing the layout. I would like to be able to save the state of the application for each user. I have done quite a bit of searching and found these:

        http://stackoverflow.com/questions/2076259/how-to-auto-save-and-auto-load-all-properties-in-winforms-c
        http://stackoverflow.com/questions/1669522/c-save-winform-or-controls-to-file

    Basically, from what I understand, the best approach is to serialize the data to XML; however, WinForms controls are not serializable, so I would have to use surrogate classes: http://www.codeproject.com/KB/dotnet/Surrogate_Serialization.aspx Now, do I need to write a surrogate class for each of my controls? I would need to write some sort of recursive algorithm to save all my controls - what is the best approach to accomplish that? How would I then restore all the windows - should I use the memento design pattern for that? If I want to implement multiple users later, should I use NHibernate to store all the object data in a database? I am still trying to wrap my head around the problem, and if anyone has any experience or advice I would greatly appreciate it. Thanks.
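    To make the surrogate idea concrete: rather than serializing the controls themselves, capture only the state worth restoring in plain serializable classes and persist those with XmlSerializer. The sketch below is illustrative only - every type and property name is an assumption, and a real version would record whatever layout details matter (docking, MDI child order, user-added control types, and so on):

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        public class ControlState
        {
            public string Name { get; set; }
            public string TypeName { get; set; }
            public int X { get; set; }
            public int Y { get; set; }
            public int Width { get; set; }
            public int Height { get; set; }
        }

        public class WorkspaceState
        {
            public List<ControlState> Controls { get; set; }

            public WorkspaceState()
            {
                Controls = new List<ControlState>();
            }

            public void Save(string path)
            {
                var serializer = new XmlSerializer(typeof(WorkspaceState));
                using (var stream = File.Create(path))
                    serializer.Serialize(stream, this);
            }

            public static WorkspaceState Load(string path)
            {
                var serializer = new XmlSerializer(typeof(WorkspaceState));
                using (var stream = File.OpenRead(path))
                    return (WorkspaceState)serializer.Deserialize(stream);
            }
        }

    Walking the form's Controls collection recursively to fill (and later re-apply) these records covers the "recursive algorithm" part, and the same per-user XML could later move into a database if multiple users are needed.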

    Read the article

  • MyMessage<T> throws an exception when calling XmlSerializer

    - by Arthis
    I am very new to NServiceBus. I am using version 3.0.1, the latest release at the time of writing, and I wonder if my case is a normal limitation of NSB that I am not aware of. I have an ASP.NET MVC application I am trying to set up, and in my global.asax I have the following:

        var configure = Configure.WithWeb()
            .DefaultBuilder()
            .ForMvc()
            .XmlSerializer();

    But I get an error from the XmlSerializer when dealing with one of my objects:

        [Serializable]
        public class MyMessage<T> : IMessage
        {
            public T myobject { get; set; }
        }

    I pass through

        XmlSerializer()
        instance.Initialize(types);
        this.InitType(type, moduleBuilder);
        this.InitType(info2.PropertyType, moduleBuilder);

    and then, when dealing with T,

        string typeName = GetTypeName(t);

    typeName is null and the following instruction

        if (!nameToType.ContainsKey(typeName))

    ends in error: null value not allowed. Is this some limitation of NServiceBus, or am I messing something up?
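    If it turns out that the XML serializer simply cannot handle generic message types (which the null typeName hints at), a conservative workaround is to close the generic into one plain message class per payload. A sketch with illustrative names, where IMessage is the usual NServiceBus interface:

        // Instead of MyMessage<Order>:
        public class Order
        {
            public int Id { get; set; }
        }

        public class OrderMessage : IMessage
        {
            public Order Payload { get; set; }
        }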

    Read the article

  • Duplicate Items Using Join in NHibernate Map

    - by Colin Bowern
    I am trying to retrieve the individual detail rows without having to create an object for the parent. I have a map which joins a parent table with the detail to achieve this:

        Table("UdfTemplate");
        Id(x => x.Id, "Template_Id");
        Map(x => x.FieldCode, "Field_Code");
        Map(x => x.ClientId, "Client_Id");
        Join("UdfFields", join =>
        {
            join.KeyColumn("Template_Id");
            join.Map(x => x.Name, "COLUMN_NAME");
            join.Map(x => x.Label, "DISPLAY_NAME");
            join.Map(x => x.IsRequired, "MANDATORY_FLAG")
                .CustomType<YesNoType>();
            join.Map(x => x.MaxLength, "DATA_LENGTH");
            join.Map(x => x.Scale, "DATA_SCALE");
            join.Map(x => x.Precision, "DATA_PRECISION");
            join.Map(x => x.MinValue, "MIN_VALUE");
            join.Map(x => x.MaxValue, "MAX_VALUE");
        });

    When I run the query in NH using:

        Session.CreateCriteria(typeof(UserDefinedField))
            .Add(Restrictions.Eq("FieldCode", code))
            .List<UserDefinedField>();

    I get back the first row three times as opposed to the three individual rows it should return. Looking at the SQL trace in NH Profiler the query appears to be correct. The problem feels like it is in the mapping but I am unsure how to troubleshoot that process. I am about to turn on logging to see what I can find but I thought I would post here in case someone with experience mapping joins knows where I am going wrong.

    Read the article

  • Objects in JavaScript defined and undefined at the same time (in a FireFox extension)

    - by Alexey Romanov
    I am chasing down a bug in a Firefox extension. I've finally managed to see it for myself (I've only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js. That is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here:

        gBrowser.addEventListener(this.loadType, function (event) {
            nextplease.loadListener.onContentLoaded(event);
        }, true);

    So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857. That is, inside the functions

        nextplease.getNextLink = function () {
            nextplease.getLink(window.content, nextplease.NextPhrasesMap, nextplease.NextImagesMap,
                nextplease.isNextRegExp, nextplease.NEXT_SEARCH_TYPE);
        }

        nextplease.getPrevLink = function () {
            nextplease.getLink(window.content, nextplease.PrevPhrasesMap, nextplease.PrevImagesMap,
                nextplease.isPrevRegExp, nextplease.PREV_SEARCH_TYPE);
        }

    So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?

    Read the article

  • Adding a font for use in ReportLab

    - by Jimmy McCarthy
    I'm trying to add a font to the Python ReportLab library so that I can use it in a function. The function is using canvas.Canvas to draw a bunch of text in a PDF, nothing complicated, but I need a fixed-width font for layout reasons. When I tried to register a font using what little info I could find, that seemed to work. But when I call .addFont('fontname') on my Canvas object, I keep getting "PDFDocument instance has no attribute 'addFont'". Is the function just not implemented? How do I get access to fonts other than the 10 or so default ones that are listed in .getAvailableFonts? Thanks. Some example code of what I'm trying to make happen:

        from reportlab.pdfgen import canvas

        c = canvas.Canvas('label.pdf')
        c.addFont('TestFont')  # This throws the error listed above, regardless of
                               # what argument I use (whether it refers to a font or not).
        c.drawString(1, 1, 'test data here')
        c.showPage()
        c.save()

    To register the font, I tried:

        from reportlab.lib.fonts import addMapping
        from reportlab.pdfbase import pdfmetrics

        pdfmetrics.registerFont(TTFont('TestFont', 'ghettomarquee.ttf'))
        addMapping('TestFont', 0, 0, 'TestFont')

    where 'ghettomarquee.ttf' was just a random font I had lying around.

    Read the article

  • shared memory STL maps

    - by user306963
    Hello, I am writing an Apache module in C++. I need to store the common data that all children need to read in a portion of shared memory. The structure is a kind of map of vectors, so I want to use an STL map and vectors for it. I have written a shared allocator and a shared manager for the purpose; they work fine for vectors but not for maps. Below is an example:

        typedef vector<CustomersData, SharedAllocator<CustomersData> > CustomerVector;
        CustomerVector spData;      // this one works fine

        typedef SharedAllocator< pair< const int, CustomerVector > > PairAllocator;
        typedef map< int, CustomerVector, less<int>, PairAllocator > SharedMap;
        SharedMap spIndex;          // this one doesn't work

    I get compile-time errors when I try to use the second object (spIndex), something like:

        ../SpatialIndex.h:97: error: '((SpatialIndex*)this)->SpatialIndex::spIndex' does not have class type

    It looks like the compiler cannot determine a type for the SharedMap template type, which is strange in my opinion; it seems to me that all the template parameters have been specified. Can you help? Thanks, Benvenuto

    Read the article

  • How do you replicate changes from one excel sheet to another in two separate excel apps?

    - by incognick
    This is all in C# .NET Excel interop automation for Office 2007. Say you create two Excel applications and open the same workbook in each application:

        app = new Excel.ApplicationClass();
        app2 = new Excel.ApplicationClass();

        string fileLocation = "myBook.xslx";

        workbook = app.Workbooks.Open(fileLocation, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing);
        workbook2 = app2.Workbooks.Open(fileLocation, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing,
            Type.Missing, Type.Missing, Type.Missing);

    Now, I want to replicate any changes that occur in workbook2 into workbook. I figured out I can hook up the SheetChange event to capture cell changes:

        app.SheetChange += new Microsoft.Office.Interop.Excel.AppEvents_SheetChangeEventHandler(app_SheetChange);

        void app_SheetChange(object Sh, Microsoft.Office.Interop.Excel.Range Target)
        {
            Excel.Worksheet sheetReadOnly = (Excel.Worksheet)Sh;
            string changedRange = Target.get_Address(missing, missing, Excel.XlReferenceStyle.xlA1, missing, missing);
            Console.WriteLine("The value of " + sheetReadOnly.Name + ":" + changedRange + " was changed to = " + Target.Value2);

            Excel.Worksheet sheet = workbook.Worksheets[sheetReadOnly.Index] as Excel.Worksheet;
            Excel.Range range = sheet.get_Range(changedRange, missing);
            range.Value2 = Target.Value2;
        }

    How do you capture calculation changes? I can hook onto the Calculate event, but the only thing that is passed is the sheet, not the cells that were updated. I tried forcing an app.Calculate() or app.CalculateFullRebuild(), but nothing updates in the other application. The Change event does not get fired when formulas change (i.e. a slider control causes a SheetCalculate event and not a SheetChange event). Is there a way to see which formulas were updated? Or is there an easier way to sync two workbooks programmatically in real time?

    Read the article

  • Connection reset when calling disconnect() using enterprisedt's FTP Java framework

    - by Frederik Wordenskjold
    I'm having trouble disconnecting from an FTP server using the enterprisedt Java FTP framework. I simply cannot call disconnect() on a FileTransferClient object without getting an error. I do not do anything besides connecting to the server and then disconnecting:

        // create client
        log.info("Creating FTP client");
        ftp = new FileTransferClient();

        // set remote host
        log.info("Setting remote host");
        ftp.setRemoteHost(host);
        ftp.setUserName(username);
        ftp.setPassword(password);

        // connect to the server
        log.info("Connecting to server " + host);
        ftp.connect();
        log.info("Connected and logged in to server " + host);

        // Shut down client
        log.info("Quitting client");
        ftp.disconnect();

        log.info("Example complete");

    When running this, the log reads:

        INFO [test] 28 maj 2010 16:57:20.216 : Creating FTP client
        INFO [test] 28 maj 2010 16:57:20.263 : Setting remote host
        INFO [test] 28 maj 2010 16:57:20.263 : Connecting to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Connected and logged in to server x
        INFO [test] 28 maj 2010 16:57:20.979 : Quitting client
        ERROR [FTPControlSocket] 28 maj 2010 16:57:21.026 : Read failed ('' read so far)

    And the stack trace:

        com.enterprisedt.net.ftp.ControlChannelIOException: Connection reset
            at com.enterprisedt.net.ftp.FTPControlSocket.readLine(FTPControlSocket.java:1029)
            at com.enterprisedt.net.ftp.FTPControlSocket.readReply(FTPControlSocket.java:1089)
            at com.enterprisedt.net.ftp.FTPControlSocket.sendCommand(FTPControlSocket.java:988)
            at com.enterprisedt.net.ftp.FTPClient.quit(FTPClient.java:4044)
            at com.enterprisedt.net.ftp.FileTransferClient.disconnect(FileTransferClient.java:1034)
            at test.main(test.java:46)

    It should be noted that I can connect and do things with the server, like getting a list of files in the current working directory, without problems. But I can't, for some reason, disconnect! I've tried using both active and passive mode. The above example is, by the way, copy/pasted from their own example. I cannot find ANYTHING related to this with a Google search, so I was hoping you might have suggestions or experience with this issue.

    Read the article
