Search Results

Search found 3534 results on 142 pages for 'sets'.

  • How to translate CCSID 65535 in SQuirreL from a DB2 on an iSeries

    - by ZS6JCE
    I am new to SQuirreL SQL. I need some help translating CCSID 65535 into ASCII, Unicode (or anything human readable). I am using the JDBC driver per the following guide. According to IBM's website:

        What character conversion issues must my program deal with? The IBM i database uses EBCDIC to store text. Java uses Unicode. The JDBC driver handles all conversion between character sets, so your program should not have to worry about it.

    but I think they refer to CCSID 37 and not 65535 (hex). I have got the following info from my DB2 database. Doing DSPFD gives me:

        Coded character set identifier . . . . . . : CCSID 65535

    Doing DSPFFD gives me:

        TXT CHAR 3 3 41 Both Text
        Field text . . . . . . . . . . . . . . . : Text
        Coded Character Set Identifier . . . . . : 65535

    But the SQuirreL query result for the TXT field is:

        5c c1 c4 c4 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 c1 40 7e 40 c2 40 4e 40 c3 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40

    which should be translated to something like:

        *ADD A = B + C
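
    Incidentally, CCSID 65535 marks a column as binary data, so the JDBC driver deliberately skips EBCDIC-to-Unicode conversion and hands back raw bytes; the dump above is plain EBCDIC code page 37. A minimal C# sketch of decoding it (hex string abbreviated from the dump above; on .NET Core/5+ the IBM037 code page needs the System.Text.Encoding.CodePages provider registered first):

        using System;
        using System.Text;

        class EbcdicDecode
        {
            static void Main()
            {
                // A shortened slice of the hex dump returned for the TXT field.
                string hex = "5c c1 c4 c4 40 40 c1 40 7e 40 c2 40 4e 40 c3";
                string[] parts = hex.Split(' ');
                byte[] bytes = new byte[parts.Length];
                for (int i = 0; i < parts.Length; i++)
                {
                    bytes[i] = Convert.ToByte(parts[i], 16);
                }
                // Code page 37 is IBM EBCDIC (US/Canada).
                string text = Encoding.GetEncoding(37).GetString(bytes);
                Console.WriteLine(text); // prints "*ADD  A = B + C"
            }
        }

    On the Java side, the IBM Toolbox (jt400) driver also exposes a "translate binary=true" connection property that applies the same conversion to CCSID 65535 columns automatically.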

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and whether I am creating a potential race condition.

    Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as a parameter, and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses that to update an audit table that indicates who deleted the row(s).

    The question is: if two delete sprocs from different users are executing at the same time, can CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server, which is key to the article being helpful!

    I'd post code, but I'm short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.
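
    For context, CONTEXT_INFO is scoped to the session: SET CONTEXT_INFO affects only the current connection (SPID), and a trigger runs in the same session as the statement that fired it, so two users deleting concurrently on separate connections cannot see each other's value. The pattern only breaks if the value is set on one connection and the delete issued on another. A hedged C# sketch of keeping both on one connection (the sproc and parameter names are hypothetical):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class AuditedDelete
        {
            static void DeleteWidget(string connectionString, int widgetId, Guid userId)
            {
                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // Visible only to this session; concurrent callers on other
                    // connections each get their own CONTEXT_INFO slot.
                    using (SqlCommand set = conn.CreateCommand())
                    {
                        set.CommandText = "SET CONTEXT_INFO @ctx";
                        set.Parameters.Add("@ctx", SqlDbType.VarBinary, 128).Value =
                            userId.ToByteArray();
                        set.ExecuteNonQuery();
                    }

                    // The delete trigger reads CONTEXT_INFO() inside this same session.
                    using (SqlCommand del = conn.CreateCommand())
                    {
                        del.CommandType = CommandType.StoredProcedure;
                        del.CommandText = "dbo.DeleteWidget";
                        del.Parameters.AddWithValue("@WidgetId", widgetId);
                        del.ExecuteNonQuery();
                    }
                }
            }
        }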

  • Rails test db doesn't persist record changes

    - by nathan.f77
    I've been trying to solve a problem for a few weeks now. I am running rspec tests for my Rails app, and they are working fine except for one error that I can't seem to get my head around.

    I am using MySQL with the InnoDB engine. I have set config.use_transactional_fixtures = true in spec_helper.rb, and I load my test fixtures manually with the command rake spec:db:fixtures:load. The rspec test is being written for a BackgrounDRb worker, and it is testing that a record can have its state updated (through the state_machine gem).

    Here is my problem: I have a model called Listing. The rspec test calls the update_sold_items method within a file called listing_worker.rb. This method calls listing.sell for a particular record, which sets the listing record's 'state' column to 'sold'. So far, this is all working fine, but when the update_sold_items method finishes, my rspec test fails here:

        listing = Listing.find_by_listing_id(listing_id)
        listing.state.should == "sold"

        expected: "sold", got: "current" (using ==)

    I've been trying to track down why the state change is not persisting, but am pretty much lost. Here is the result of some debugging code that I placed in the update_sold_items method during the test:

        pp listing.state   # => "current"
        listing.sell!
        listing.save!
        pp listing.state   # => "sold"
        listing.reload
        pp listing.state   # => "current"

    I cannot understand why it saves perfectly fine, but then reverts back to the original record whenever I call reload, or Listing.find, etc.

    Thanks for reading this, and please ask any questions if I haven't given enough information. Thanks for your help, Nathan B

    P.S. I don't have a problem creating new records for other classes, and testing those records. It only seems to be a problem when I am updating records that already exist in the database.

  • Preserving Language across inline Calculated Members in SSAS

    - by Tullo
    Problem: I need to retrieve the language of a given cell from the cube. The cell is defined by code-generated MDX, which can have an arbitrary level of indirection as far as calculated members and sets go (defined in the WITH clause). SSAS appears to ignore the Language of the specified members when you declare a calculated member inline in the query.

    Example: the cube's default locale is 1033 (en-US); the cube contains a calculated measure called [Net Pounds], which is defined as [Net Amt] with language=2057 (en-GB); and the query requests this measure alongside an inline calculated measure which is simply an alias to [Net Pounds]. When used directly, the measure is formatted in the en-GB locale, but when aliased, the measure falls back to using the cube default of en-US. Here's what the query looks like:

        WITH MEMBER [Measures].[Pounds Indirect] AS [Measures].[Net Pounds]
        SELECT { [Measures].[Pounds Indirect], [Measures].[Net Pounds] } ON AXIS (0)
        FROM [Cube]
        CELL PROPERTIES language, value, formatted_value

    The query returns the expected two cells, but only uses the [Net Pounds] locale when used directly. Is there an option or switch somewhere in SSAS that will allow locale information to be visible in calculated members? I realise that it is possible to declare the inline calculated member in a particular locale, but this would involve extracting the locale from the tuple first, which (since the cube's member is isolated in the application's query schema) is unknown.

  • CALayer won't display

    - by Paul from Boston
    I'm trying to learn how to use CALayers for a project and am having trouble getting sublayers to display. I created a vanilla View-based iPhone app in Xcode for these tests. The only real code is in the ViewController, which sets up the layers and their delegates. There is a delegate, DelegateMainView, for the viewController's view layer and a second, different one, DelegateStripeLayer, for an additional layer. The ViewController code is all in awakeFromNib:

        - (void)awakeFromNib {
            DelegateMainView *oknDelegate = [[DelegateMainView alloc] init];
            self.view.layer.delegate = oknDelegate;
            CALayer *newLayer = [CALayer layer];
            DelegateStripeLayer *sldDelegate = [[DelegateStripeLayer alloc] init];
            newLayer.delegate = sldDelegate;
            [self.view.layer addSublayer:newLayer];
            [newLayer setNeedsDisplay];
            [self.view.layer setNeedsDisplay];
        }

    The two different delegates are simply wrappers for the CALayer delegate method, drawLayer:inContext:, i.e.,

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context {
            CGRect bounds = CGContextGetClipBoundingBox(context);
            ... do some stuff here ...
            CGContextStrokePath(context);
        }

    each a bit different. The layer view.layer is drawn properly, but newLayer is never drawn. If I put breakpoints in the two delegates, the program stops in DelegateMainView but never reaches DelegateStripeLayer. What am I missing here? Thanks.

  • Efficient Multiple Linear Regression in C# / .Net

    - by mrnye
    Does anyone know of an efficient way to do multiple linear regression in C#, where the number of simultaneous equations may be in the 1000s (with 3 or 4 different inputs)? After reading this article on multiple linear regression, I tried implementing it with a matrix equation:

        Matrix y = new Matrix(
            new double[,]{{745}, {895}, {442}, {440}, {1598}});
        Matrix x = new Matrix(
            new double[,]{{1, 36, 66}, {1, 37, 68}, {1, 47, 64},
                          {1, 32, 53}, {1, 1, 101}});
        Matrix b = (x.Transpose() * x).Inverse() * x.Transpose() * y;
        for (int i = 0; i < b.Rows; i++)
        {
            Trace.WriteLine("INFO: " + b[i, 0].ToDouble());
        }

    However, it does not scale well to 1000s of equations, due to the matrix inversion operation. I can call the R language and use that; however, I was hoping there would be a pure .NET solution which will scale to these large sets. Any suggestions?

    EDIT #1: I have settled on using R for the time being. Using statconn (downloaded here) I have found this method to be both fast and relatively easy to use. I.e. here is a small code snippet; it really isn't much code at all to use the R statconn library (note: this is not all the code!).

        _StatConn.EvaluateNoReturn(string.Format("output <- lm({0})", equation));
        object intercept = _StatConn.Evaluate("coefficients(output)['(Intercept)']");
        parameters[0] = (double)intercept;
        for (int i = 0; i < xColCount; i++)
        {
            object parameter = _StatConn.Evaluate(
                string.Format("coefficients(output)['x{0}']", i));
            parameters[i + 1] = (double)parameter;
        }
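
    For what it's worth, the normal-equations approach scales fine in the observation count if the accumulation is done directly: with p inputs (4 or 5 columns here), the only matrix ever inverted is the p-by-p product X'X, so the cost of the solve is constant, and one pass over the rows builds X'X and X'y without materializing the full matrix. A rough C# sketch of that accumulation (the final small solve is left to whatever matrix routine is already at hand):

        class NormalEquations
        {
            // Build X'X (p x p) and X'y (p) in a single pass, so memory use
            // does not grow with the number of observations.
            static void Accumulate(
                double[][] rows,      // each row has p entries, including the leading 1
                double[] targets,     // one target value per row
                double[,] xtx,        // zeroed p x p accumulator, filled on return
                double[] xty)         // zeroed p accumulator, filled on return
            {
                int p = xty.Length;
                for (int i = 0; i < rows.Length; i++)
                {
                    double[] r = rows[i];
                    for (int j = 0; j < p; j++)
                    {
                        xty[j] += r[j] * targets[i];
                        for (int k = 0; k < p; k++)
                        {
                            xtx[j, k] += r[j] * r[k];
                        }
                    }
                }
                // Then solve (X'X) b = X'y; inverting a 4x4 or 5x5 matrix is
                // cheap, e.g. with the same Matrix class used above.
            }
        }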

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system.

    Sites like Amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work?

    My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others.

    Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

  • How can I Fail a WebTest?

    - by craigb
    I'm using Microsoft WebTest and want to be able to do something similar to NUnit's Assert.Fail(). The best I have come up with is to throw new WebTestException(), but this shows in the test results as an Error rather than a Failure. Other than reflecting on the WebTest to set a private member variable to indicate the failure, is there something I've missed?

    EDIT: I have also used the Assert.Fail() method, but this still shows up as an error rather than a failure when used from within WebTest, and the Outcome property is read-only (has no public setter).

    EDIT: Well, now I'm really stumped. I used reflection to set the Outcome property to Failed, but the test still passes! Here's the code that sets the Outcome to Failed:

        public static class WebTestExtensions
        {
            public static void Fail(this WebTest test)
            {
                var method = test.GetType().GetMethod("set_Outcome",
                    BindingFlags.NonPublic | BindingFlags.Instance);
                method.Invoke(test, new object[] { Outcome.Fail });
            }
        }

    and here's the code that I'm trying to fail:

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            this.Fail();
            yield return new WebTestRequest("http://google.com");
        }

    Outcome is getting set to Outcome.Fail, but apparently the WebTest framework doesn't really use this to determine test pass/fail results.
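
    One reflection-free way to get a genuine Failure (rather than an Error) is to attach a validation rule that marks the response invalid, since failed validation is what the framework itself records as a test failure. A hedged sketch (the rule and its message are illustrative):

        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.WebTesting;

        // A validation rule that always fails the request it is attached to,
        // which surfaces in the results as a Failure instead of an Error.
        public class AlwaysFailRule : ValidationRule
        {
            public override void Validate(object sender, ValidationEventArgs e)
            {
                e.IsValid = false;
                e.Message = "Forced failure";
            }
        }

        public class FailingWebTest : WebTest
        {
            public override IEnumerator<WebTestRequest> GetRequestEnumerator()
            {
                WebTestRequest request = new WebTestRequest("http://google.com");
                request.ValidateResponse += new AlwaysFailRule().Validate;
                yield return request;
            }
        }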

  • Versioning friendly, extendible binary file format

    - by Bas Bossink
    In the project I'm currently working on there is a need to save a sizeable data structure to disk. Being an optimist, I thought there must be a standard solution for such a problem; however, up to now I haven't found a solution that satisfies the following requirements:

        1. .net 2.0 support, preferably with a foss implementation
        2. version friendly (this should be interpreted as: reading an old version of the format should be relatively simple if the changes in the underlying data structure are simple, say adding/dropping fields)
        3. ability to do some form of random access where part of the data can be extended after initial creation (think of this as extending intermediate results)
        4. space and time efficient (xml has been excluded as an option given this requirement)

    Options considered so far:

        - Protocol Buffers: turned down by verdict of the documentation about Large Data Sets; since that comment suggests adding another layer on top, this would call for additional complexity which I wish to have handled by the file format itself.
        - HDF5, EXI: do not seem to have .net implementations.
        - SQLite: the data structure at hand would result in a pretty complex table structure that seems too heavyweight for the intended use.
        - BSON: does not appear to support requirement 3.
        - Fast Infoset: only seems to have buyware .net implementations.

    Any recommendations or pointers are greatly appreciated. Furthermore, if you believe any of the information above is not true, please provide pointers/examples to prove me wrong.
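
    If nothing off the shelf fits, a hand-rolled tag-length-value layout covers much of the version-friendliness requirement: readers skip field IDs they don't recognise, so later versions can add fields without breaking older readers, and length prefixes make it possible to seek past records for a crude form of random access. A minimal .net 2.0-flavoured sketch of the writing side (the field IDs and record shape are invented for illustration):

        using System.IO;
        using System.Text;

        // Each field is written as (ushort id, int byteLength, payload bytes).
        // An old reader that meets an unknown id just seeks past byteLength
        // bytes, which is what keeps the format version friendly.
        class TlvWriter
        {
            private readonly BinaryWriter writer;

            public TlvWriter(Stream output)
            {
                writer = new BinaryWriter(output);
            }

            public void WriteField(ushort fieldId, byte[] payload)
            {
                writer.Write(fieldId);
                writer.Write(payload.Length);
                writer.Write(payload);
            }

            public void WriteString(ushort fieldId, string value)
            {
                WriteField(fieldId, Encoding.UTF8.GetBytes(value));
            }
        }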

  • Starting an Erlang slave node in escript fails when using custom Erlang in Ubuntu 10.4

    - by Adam Lindberg
    I have the following escript:

        #!/usr/bin/env escript
        %%! -name [email protected]

        main(_) ->
            NodeName = test,
            Host = '127.0.0.1',
            Args = "",
            {ok, _Node} = slave:start_link(Host, NodeName, Args),
            io:format("Node started successfully!").

    When running it on Ubuntu 10.04 I get this:

        $ ./start_slave
        Node started successfully!
        $

    I want to install my own Erlang (latest version, debug compiled files for dialyzer, etc.), since the stock install of Erlang on Ubuntu lacks some features. I put my Erlang binaries inside ~/Applications/bin. Starting Erlang normally works, and starting slave nodes inside an Erlang shell works as well. However, now my escript doesn't work. After about 60 seconds it returns an error:

        $ ./start_slave
        escript: exception error: no match of right hand side value {error,timeout}

    Even if I change the first line of the escript to use my Erlang version, it still does not work:

        #!/home/user/Applications/bin/escript

    The slave node is started with a call to erlang:open_port/2, which seems to be using sh, which in turn does not read my .bashrc file that sets my custom PATH environment variable. The timeout seems to occur when slave:start_link/3 waits for the slave node to respond, which it never does.

    How can I roll my own installation of Erlang and start slave nodes inside escripts on Ubuntu 10.04?

  • MySQL use certain columns, based on other columns

    - by Rabbott
    I have this query:

        SELECT COUNT(articles.id) AS count
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
          AND xml_documents.stream_id = streams.id
          AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
          AND streams.brand_id = 7

    which just uses the default equijoin by specifying three tables in csv format in the FROM clause. What I need to do is group this by a value found within articles.source (raw xml), so it could turn into this:

        SELECT COUNT(articles.id) AS count,
               ExtractValue(articles.source, "/article/media_type") AS media_type
        FROM articles, xml_documents, streams
        WHERE articles.xml_document_id = xml_documents.id
          AND xml_documents.stream_id = streams.id
          AND articles.published_at BETWEEN '2010-01-01' AND '2010-04-01'
          AND streams.brand_id = 7
        GROUP BY media_type

    which works fine. The problem is, I'm using Rails, and using STI for the xml_documents table. The articles.source that is provided to the ExtractValue method will be of a couple different formats. So what I need to be able to do is use "/article/media_type" if xml_documents.type = 'source one' and use "/article/source" if xml_documents.type = 'source two'. This is just because the two document types format their XML differently, but I don't want to have to run multiple queries to retrieve this information. It would be nice if one could use a ternary operator, but I don't think that is possible.

    EDIT: At this point I am looking at making a temp table, or simply using UNION to place multiple result sets together.

  • Images in the project NOT adding to array - don't know why.

    - by Sam Jarman
    Hi there, I have two sets of images. Each set has 16 images. One set is called 0.png through to 15.png; the other is a0.png through to a15.png. In my app, it loads each one dependent on a variable (which, by logging, I have proved works). Here is the code:

        [MemoryManager sharedMemoryManager];
        NSLog(@"THEME: %@", [MemoryManager sharedMemoryManager].themeName);
        imageArray = [[NSMutableArray alloc] init];
        if ([MemoryManager sharedMemoryManager].themeName == @"hand") {
            NSLog(@"Here 2");
            [imageArray addObject:[UIImage imageNamed:@"0.png"]]; //
            [imageArray addObject:[UIImage imageNamed:@"1.png"]]; //1
            [imageArray addObject:[UIImage imageNamed:@"2.png"]]; //2
            [imageArray addObject:[UIImage imageNamed:@"3.png"]]; //3
            [imageArray addObject:[UIImage imageNamed:@"4.png"]]; //4
            [imageArray addObject:[UIImage imageNamed:@"5.png"]]; //5
            [imageArray addObject:[UIImage imageNamed:@"6.png"]]; //6
            [imageArray addObject:[UIImage imageNamed:@"7.png"]]; //7
            [imageArray addObject:[UIImage imageNamed:@"8.png"]]; //8
            [imageArray addObject:[UIImage imageNamed:@"9.png"]]; //9
            [imageArray addObject:[UIImage imageNamed:@"10.png"]]; //10
            [imageArray addObject:[UIImage imageNamed:@"11.png"]]; //11
            [imageArray addObject:[UIImage imageNamed:@"12.png"]]; //12
            [imageArray addObject:[UIImage imageNamed:@"13.png"]]; //13
            [imageArray addObject:[UIImage imageNamed:@"14.png"]]; //14
            [imageArray addObject:[UIImage imageNamed:@"15.png"]]; //15
        }
        if ([MemoryManager sharedMemoryManager].themeName == @"letters") {
            NSLog(@"Here 3");
            //[imageArray removeAllObjects];
            [imageArray addObject:[UIImage imageNamed:@"a0.png"]]; //
            [imageArray addObject:[UIImage imageNamed:@"a1.png"]]; //1
            [imageArray addObject:[UIImage imageNamed:@"a2.png"]]; //2
            [imageArray addObject:[UIImage imageNamed:@"a3.png"]]; //3
            [imageArray addObject:[UIImage imageNamed:@"a4.png"]]; //4
            [imageArray addObject:[UIImage imageNamed:@"a5.png"]]; //5
            [imageArray addObject:[UIImage imageNamed:@"a6.png"]]; //6
            [imageArray addObject:[UIImage imageNamed:@"a7.png"]]; //7
            [imageArray addObject:[UIImage imageNamed:@"a8.png"]]; //8
            [imageArray addObject:[UIImage imageNamed:@"a9.png"]]; //9
            [imageArray addObject:[UIImage imageNamed:@"a10.png"]]; //10
            [imageArray addObject:[UIImage imageNamed:@"a11.png"]]; //11
            [imageArray addObject:[UIImage imageNamed:@"a12.png"]]; //12
            [imageArray addObject:[UIImage imageNamed:@"a13.png"]]; //13
            [imageArray addObject:[UIImage imageNamed:@"a14.png"]]; //14
            [imageArray addObject:[UIImage imageNamed:@"a15.png"]]; //15
            NSLog(@"Here 4");
        }

    The log says:

        2010-05-26 21:30:57.092 Memory[22155:207] Here 1
        2010-05-26 21:30:57.093 Memory[22155:207] THEME: letters
        2010-05-26 21:30:57.095 Memory[22155:207] Here 3
        2010-05-26 21:30:57.109 Memory[22155:207] Here 4

    The images are in the same folder as the .xproj file. They simply are not working. Any ideas? Cheers

  • How to disable Power off/Power on sound on Android phone? [closed]

    - by yvolk
    My Android phone (Android v2.1) produces a loud sound both before Power off and after Power on (boot) is completed. I don't want to have to turn the sound off before rebooting/powering off and turn it back on after every reboot. I just want to get rid of these sounds forever :-) How can I do this (maybe by executing some script, changing some properties... preferably without rooting my phone)?

    Update: Thank you Christopher Orr! I've also found some information that partially lets me achieve what I'm looking for (on androidforums.com):

        "If you go into Settings > Sound & display > Notification ringtone, that sets your notification tone for app updates, power on (rebooting), etc. If you want to change the notification for messages you would have to go into Messaging and then hit the menu key for the notification option. You can do the same for email accounts as well."

    So I disabled the Power on/off sounds (plus something else... but I haven't noticed an undesired side effect yet...) with built-in settings.

  • gcc: Do I need -D_REENTRANT with pthreads?

    - by stefanB
    On Linux (kernel 2.6.5) our build system calls gcc with -D_REENTRANT. Is this still required when using pthreads? How is it related to the gcc -pthread option? I understand that I should use -pthread with pthreads, but do I still need -D_REENTRANT? On a side note, is there any difference that you know of in the usage of _REENTRANT between gcc 3.3.3 and gcc 4.x.x?

    When I use the -pthread gcc option, I can see that _REENTRANT gets defined. Will omitting -D_REENTRANT from the command line make any difference? For example, could some objects be compiled without multithreaded support and then linked into a binary that uses pthreads and cause problems? I assume it should be OK just to use:

        g++ -pthread

        > echo | g++ -E -dM -c - > singlethreaded
        > echo | g++ -pthread -E -dM -c - > multithreaded
        > diff singlethreaded multithreaded
        39a40
        > #define _REENTRANT 1

    We're compiling multiple static libraries and applications that link with the static libraries; both the libraries and the applications use pthreads. I believe it was required at some stage in the past, but I want to know if it is still required. Googling hasn't returned any recent information mentioning -D_REENTRANT with pthreads. Could you point me to links or references discussing its use with recent versions of kernel/gcc/pthread?

    Clarification: At the moment we're using -D_REENTRANT and -lpthread; I assume I can replace them with just g++ -pthread. Looking at man gcc, it sets the flags for both preprocessor and linker. Any thoughts?

  • GXT LayoutContainer with scrollbar reports a client height value which includes the area below the scrollbar

    - by Pieter Breed
    I have this code, which sets up a "main" container into which other modules of the application will go:

        LayoutContainer c = new LayoutContainer();
        c.setScrollMode(Scroll.ALWAYS);
        parentContainer.add(c, <...>);

    Then later on, I have the following as an event handler:

        pContainer = c; // pContainer is actually a parameter, but it has c's value
        pContainer.removeAll();
        pContainer.setLayout(new FitLayout());

        LayoutContainer wrapperContainer = new LayoutContainer();
        wrapperContainer.setLayout(new BorderLayout());
        wrapperContainer.setBorders(false);
        pContainer.add(wrapperContainer);

        LayoutContainer west = pWestContentContainer;
        BorderLayoutData westLayoutData = new BorderLayoutData(LayoutRegion.WEST);
        westLayoutData.setSize(pWidth);
        westLayoutData.setSplit(true);
        wrapperContainer.add(west, westLayoutData);

        LayoutContainer center = new LayoutContainer();
        wrapperContainer.add(center, new BorderLayoutData(LayoutRegion.CENTER));
        pCallback.withSplitContainer(center);

        pContainer.layout();

    So in effect, the container called 'west' here will be where the module's UI gets displayed. That module UI then does a simple RowLayout with two children. The bottom child has RowData(1, 1) so it fills up all the available space.

    My problem is that the c (parent) container reports height and width values which include the area underneath the scrollbars. What I would like is for the reported client area to exclude the scrollbars' own space.

  • MSBuild fails, but building inside Visual Studio works fine

    - by Matt
    C#, .NET 2.0.

    I have an ASP.NET website in a solution, with 2 other projects (used as library references). When I build (debug or release) in Visual Studio, everything works fine. However, building with MSBuild fails. This build had been working (it's actually invoked via a nAnt task). The only thing that has changed is that I have a new user control whose Type I am referencing in my code-behind. The offending code is in my ASPX code-behind; MessageAlert is the UserControl:

        MessageAlert userControl =
            this.LoadControl("~/UserControls/MessageAlert.ascx") as MessageAlert;
        userControl.UserMessage = message;
        this.UserMessages.Controls.Add(userControl);

    In order to get Visual Studio to recognize the type 'MessageAlert' I had to:

        1) Set the ClassName="MessageAlert" in the @Control markup at the top of the user control (because using the auto-generated UserControls_MessageAlert wasn't working either)
        2) Register the user control in the markup of my ASPX, using an @Register
        3) Add a "using ASP" to the top of my code-behind

    After those steps, I could successfully reference the MessageAlert type in my code-behind from Visual Studio. But from MSBuild I get: "The type or namespace name 'MessageAlert' could not be found (are you missing a using directive or an assembly reference?)"

    The MSBuild execution is very simple - it points to the very same solution file and sets the configuration property to release. It seems, based on the number of steps I had to go through to get Type references to MessageAlert in Visual Studio, that there is something missing in the MSBuild process. But what? Doesn't Visual Studio in fact invoke MSBuild behind the scenes? Is there a better way to reference a UserControl type in the code-behind of an ASPX?

    EDIT: To clarify, the MessageAlert user control is not in the other referenced assemblies/projects. I mentioned them because, together with the website, they compose the solution file, which is the same .sln file being built by MSBuild.

  • Delegates in .NET: how are they constructed?

    - by Saulius
    While inspecting delegates in C# and .NET in general, I noticed some interesting facts. Creating a delegate in C# creates a class derived from MulticastDelegate with a constructor:

        .method public hidebysig specialname rtspecialname
                instance void .ctor(object 'object', native int 'method') runtime managed
        {
        }

    meaning that it expects the instance and a pointer to the method. Yet the syntax for constructing a delegate in C# suggests that it has a constructor

        new MyDelegate(int () target)

    where I can recognise int () as a function instance (int *target() would be a function pointer in C++). So obviously the C# compiler picks out the correct method from the method group defined by the function name and constructs the delegate.

    So the first question would be: where does the C# compiler (or Visual Studio, to be precise) pick this constructor signature from? I did not notice any special attributes or anything else that would make a distinction. Is this some sort of compiler/Visual Studio magic? If not, is the T (args) target construction valid in C#? I did not manage to get anything with it to compile, e.g.:

        int () target = MyMethod;

    is invalid, and so is doing anything with MyMethod, e.g. calling .ToString() on it (well, this does make some sense, since that is technically a method group, but I imagine it should be possible to explicitly pick out a method by casting, e.g. (int())MyFunction). So is all of this purely compiler magic?

    Looking at the construction through Reflector reveals yet another syntax:

        Func CS$1$0000 = new Func(null, (IntPtr) Foo);

    This is consistent with the disassembled constructor signature, yet it does not compile! One final interesting note is that the classes Delegate and MulticastDelegate have yet another set of constructors:

        .method family hidebysig specialname rtspecialname
                instance void .ctor(class System.Type target, string 'method') cil managed

    Where does the transition from an instance and method pointer to a type and a string method name occur? Can this be explained by the runtime managed keywords in the custom delegate constructor signature, i.e. does the runtime do its job here?
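
    For reference, the (object, native int) constructor is never named in C# source: a method-group conversion compiles to an ldftn (load function pointer) instruction followed by newobj on that constructor, which is why no C#-level syntax for it exists. The late-bound (Type, string) path is surfaced separately through Delegate.CreateDelegate. A small sketch contrasting the two (assuming a method compatible with Func<int>):

        using System;

        class DelegateConstruction
        {
            static int Answer() { return 42; }

            static void Main()
            {
                // Method-group conversion: the compiler emits
                // ldftn Answer + newobj Func<int>(object, native int).
                Func<int> direct = Answer;

                // Late-bound construction from a type and a method name,
                // mirroring the (Type, string) constructor on Delegate.
                Func<int> byName = (Func<int>)Delegate.CreateDelegate(
                    typeof(Func<int>), typeof(DelegateConstruction), "Answer");

                Console.WriteLine(direct()); // 42
                Console.WriteLine(byName()); // 42
            }
        }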

  • CSS for toolbar with UI Slider centered between left and right buttons

    - by Tauren
    I'm attempting to create a 100% width toolbar. This toolbar needs to have a variable number of buttons aligned to the left side, as well as a variable number of buttons aligned to the right. That's the easy part. But now I want to put a jQuery UI slider in the center that takes up the full remaining space between the buttons on the left and the buttons on the right.

    I'm having trouble figuring out a pure-CSS way of doing this. I've tried something like below, but I really don't want to have fixed percentage widths. If there is only one button on the left and one on the right, then I want the centered slider to take the full space between them, not just 33% of the full width.

        .toolbar {width: 100%;}
        .toolbar .toolbar-left {float: left; width: 33%;}
        .toolbar .toolbar-right {float: right; width: 33%;}
        .toolbar .toolbar-center {margin: 0 auto; width: 33%;}

    I'm using UI Buttons for my buttons and styling -- see an example. In that example, there is a toolbar that is the full width of the page. Imagine the two right-most sets of buttons being aligned to the right of the toolbar. Then in the middle empty space, I want to put a UI Slider, and use all the space between the buttons (minus some padding).

    Is there a way to do this with CSS, or will I need to whip up some javascript to position things properly?

  • Property being immediately reset by ApplicationSetting Property Binding

    - by Slider345
    I have a .NET 2.0 Windows application written in C#, which currently uses several project settings to store user configurations. The forms in the application are made up of lots of user controls, each of which has properties that need to be set to these project settings. Right now these settings are manually assigned to the user control properties.

    I was hoping to simplify the code by replacing the manual implementation with ApplicationSettings property bindings. However, my first property is not behaving properly at all. The setting is an integer, used to record a port number typed into a text box. The setting is bound to an integer property on a user control, and that property sets the Text property on a TextBox control.

    When I type a new value into the textbox at runtime, as soon as the textbox loses focus, the value is immediately replaced by the original value. A breakpoint on the property shows that it is immediately setting the property back to the setting from the properties collection after I set it. Can anyone see what I'm doing wrong? Here's some code.

    The setting:

        [global::System.Configuration.UserScopedSettingAttribute()]
        [global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
        [global::System.Configuration.DefaultSettingValueAttribute("1000")]
        public int Port
        {
            get { return ((int)(this["Port"])); }
            set { this["Port"] = value; }
        }

    The binding:

        this.ctrlNetworkConfig.DataBindings.Add(new System.Windows.Forms.Binding(
            "PortNumber", global::TestProject.Properties.Settings.Default, "Port",
            true, System.Windows.Forms.DataSourceUpdateMode.OnPropertyChanged));
        this.ctrlNetworkConfig.PortNumber = global::TestProject.Properties.Settings.Default.Port;

    And lastly, the property on the user control:

        public int PortNumber
        {
            get
            {
                int port;
                if (int.TryParse(this.txtPortNumber.Text, out port))
                    return port;
                else
                    return 0;
            }
            set { txtPortNumber.Text = value.ToString(); }
        }

    Any thoughts? Thanks in advance for your help.
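
    One thing worth checking: with DataSourceUpdateMode.OnPropertyChanged, a WinForms binding can only push the control's value back into the setting if it can detect the change, and for a plain property it does that by listening for an event named <PropertyName>Changed on the control. Without a PortNumberChanged event, traffic flows only from the setting to the control, so the stored value keeps overwriting the textbox. A hedged, condensed sketch of wiring that up (the class name is assumed, and the textbox is created inline here rather than in a designer file):

        using System;
        using System.Windows.Forms;

        public class NetworkConfigControl : UserControl
        {
            private readonly TextBox txtPortNumber = new TextBox();

            // WinForms bindings look for "<PropertyName>Changed" to know when
            // to read PortNumber back into the bound data source.
            public event EventHandler PortNumberChanged;

            public NetworkConfigControl()
            {
                Controls.Add(txtPortNumber);

                // Any edit in the textbox announces that PortNumber changed,
                // prompting the binding to write the new value to the setting.
                txtPortNumber.TextChanged += delegate
                {
                    if (PortNumberChanged != null)
                        PortNumberChanged(this, EventArgs.Empty);
                };
            }

            public int PortNumber
            {
                get
                {
                    int port;
                    return int.TryParse(txtPortNumber.Text, out port) ? port : 0;
                }
                set { txtPortNumber.Text = value.ToString(); }
            }
        }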

  • UiModeManager - NightMode (FroYo)

    - by Kaloer
    Hi there, I have been trying to turn off the buttons' light in my application using the UiModeManager's night mode function. The default Desk Clock application (Nexus One) turns off the buttons' light when it is dimmed, and I want to do this as well. I've tried using the following code:

        UiModeManager mgr = (UiModeManager) getSystemService(UI_MODE_SERVICE);
        mgr.setNightMode(UiModeManager.MODE_NIGHT_YES);

    The UiModeManager.setNightMode(int mode) documentation says this:

        Sets the night mode. Changes to the night mode are only effective when the car or desk mode is enabled on a device.

    Does that mean that the device has to be physically in a desk dock? I can set the device to car mode using the UiModeManager.enableCarMode(int flags) method. This works fine, but it doesn't turn off the lights; it only dims the screen's backlight. Is there a way to set the device into desk mode without using a physical desk dock? As the FroYo source code is not yet released, I cannot look at the built-in Desk Clock application. Thanks in advance.

  • Why is Dispatcher.Invoke not triggering UI update?

    - by Brandon
    I am trying to reuse a UserControl and also borrow some logic that keeps track of progress. I'll try and simplify things. MyWindow.xaml includes a MyUserControl. MyUserControl has its own progress indicator (Formatting in progress..., Copying files..., etc.) and I'd like to mirror this progress somewhere in the MyWindow form. But the user control has some logic I don't quite understand. I've read and read but I still don't understand the Dispatcher. Here's a summary of the logic in the user control that updates the progress:

        this.Dispatcher.Invoke(DispatcherPriority.Input, (Action)(() =>
        {
            DAProgressIndicator = InfiniteProgress.AddNewInstanceToControl(
                StatusGrid,
                new SolidColorBrush(new Color() { A = 170, R = 128, G = 128, B = 128 }),
                string.Empty);
            DAProgressIndicator.Message = MediaCardAdminRes.ActivatingCard;
            ActivateInProgress = true;
        }));

    I thought I'd be smart and add an event to MyUserControl that would be called in the ActivateInProgress property set logic:

        public bool ActivateInProgress
        {
            get { return _activateInProgress; }
            set
            {
                _activateInProgress = value;
                if (ActivateInProgressHandler != null)
                {
                    ActivateInProgressHandler(value);
                }
            }
        }

    I'm setting the ActivateInProgressHandler within the MyWindow constructor to the following method, which sets the view model property that is used for the window's own progress indicator:

        private void SetActivation(bool activateInProgress)
        {
            viewModel.ActivationInProgress = activateInProgress;
        }

    However, the window's progress indicator never changes. So I'm convinced that the Dispatcher.Invoke is doing something that I don't understand. If I put a message box inside the SetActivation method, the thread blocks and the window's progress indicator is updated. I understand basic threads, but this whole Dispatcher thing is new to me. What am I missing?
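
    One thing worth checking before blaming the Dispatcher: setting viewModel.ActivationInProgress only moves data into the view model, and a WPF binding repaints only when the view model announces the change, typically via INotifyPropertyChanged. If ActivationInProgress is a plain property, the indicator's binding never hears about the new value. A hedged sketch of the notifying property (the class and member names are assumed):

        using System.ComponentModel;

        // View model sketch: raising PropertyChanged is what lets the
        // window's progress indicator binding react when the flag flips.
        public class MyWindowViewModel : INotifyPropertyChanged
        {
            private bool _activationInProgress;

            public event PropertyChangedEventHandler PropertyChanged;

            public bool ActivationInProgress
            {
                get { return _activationInProgress; }
                set
                {
                    if (_activationInProgress == value) return;
                    _activationInProgress = value;
                    OnPropertyChanged("ActivationInProgress");
                }
            }

            private void OnPropertyChanged(string propertyName)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }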

  • Salesforce/PHP - outbound messages (SOAP) - memory limit issue

    - by Phill Pafford
    I'm using Salesforce to send outbound messages (via SOAP) to another server. The server can process about 8 messages at a time, but will not send back the ACK file if the SOAP request contains more than 8 messages. SF can send up to 100 outbound messages in 1 SOAP request, and I think this is causing a memory issue with PHP. If I process the outbound messages 1 by 1 they all go through fine; I can even do 8 at a time with no issues. But larger sets are not working.

    ERROR in SF:

        org.xml.sax.SAXParseException: Premature end of file

    Looking in the HTTP error logs, I see that the incoming SOAP message looks to be getting cut off, which throws a PHP warning stating:

        Premature end of data in tag ...
        PHP Fatal error: Call to a member function getAttribute() on a non-object

    This leads me to believe that PHP is having a memory issue and cannot parse the incoming message due to its size. I was thinking I could just set:

        ini_set('memory_limit', '64M');

    But would this be the correct approach? Is there a way I could set this to increase with the incoming SOAP request dynamically?

    UPDATE: Adding some code:

        $data = fopen('php://input','rb');

        $headers = getallheaders();
        $content_length = $headers['Content-Length'];
        $buffer_length = 1000;
        $fread_length = $content_length + $buffer_length;

        $content = fread($data,$fread_length);

        /**
         * Parse values from soap string into DOM XML
         */
        $dom = new DOMDocument();
        $dom->loadXML($content);
        ....

  • OJB Reference Descriptor 1:0 relationship? Should I set auto-retrieve to false?

    - by godzillasdm
    Hi, I am having an issue while using Apache OJB with Spring 2 inside my web app. I'm using an OJB reference-descriptor with 2 foreign key properties. I have an object A (parent) and object B (referenced object). The thing is, for an object A, there may or may not be an object B. In the case where there is no object B to go with object A, object B seems to be instantiated (through Spring?) anyway. However, I am unable to access object B's members. Whenever I test if object B == null, it always returns false, even though there is no matching value in the database.

    Since this object is never null, I figured I could test the object's member like so:

        if (objectb.getDocumentNumber == null) { return false; }

    However, I get this exception in the JSP:

        javax.servlet.jsp.el.ELException: An error occurred while getting property "documentNumber" from an instance of class org.sample.pojo.Objectb$$EnhancerByCGLIB$$78022a2

    and this exception in the debugger when it's creating object B:

        com.sun.jdi.InvocationException occurred invoking method.

    I am guessing that the reference-descriptor must be a 1:1+ relationship, instead of a 1:0+ relationship. I was wondering if I should set the property 'auto-retrieve' to false, and then use the PersistenceBroker.retrieveAllReferences(Object obj) method as directed. However, this method's return value is 'void', so I am guessing that Spring somehow creates and sets the reference class for me (returning me back to the same issue I'm having). I will need a way to test whether the reference object exists first and, if not, avoid calling this retrieveAllReferences method, but I don't see how.

    Am I going about this all wrong? Does reference-descriptor not allow 1:0 relations? Any workaround to my problem? Your suggestions are greatly appreciated!

  • Crystal report subreport parameter autobinding logic feeds duplicate parameters to different subreports

    - by quillbreaker
    I have a report, and I place the same subreport within the footer twice. The report has three parameters and the subreport has four parameters. When I try to run the report through the report designer, it prompts me for seven parameters instead of the eleven I was hoping for.

    It prompts for one set of parameters for the subreport (with a default prompt of @parameter(subreport.rpt)/@parameter(subreport.rpt - 01)) and passes the same set of parameters to both subreports. This isn't what I want the report to do. Furthermore, if I look at 'show report parameters', it does show eleven parameters, with the same value for both subreport parameter sets. So it knows that there are two different subreports, but it does not let me enter the values that way.

    Is there some way I can make the Crystal designer realize that it should take different values for each subreport? The only solution I've found is to add 8 more parameters, one for each subreport/subreport-parameter combination, and bind them individually. It works, but it feels like a workaround. Does anyone have a better solution?

  • How do I hook into the action method for an iPad popover toolbar button?

    - by Elisabeth
    Hi, I am using the split view template to create a simple split view that has, of course, a popover in Portrait mode. I'm using the default code generated by the template that adds/removes the toolbar item and sets the popover controller and removes it. These two methods are splitViewController:willShowViewController:... and splitViewController:willHideViewController:...

    I'm trying to figure out how to make the popover disappear if the user taps on the toolbar button while the popover is displayed. You can make the popover disappear without selecting an item if you tap anywhere outside the popover, but I would also like to make it disappear if the user taps the button again.

    Where I'm stuck is this: there doesn't seem to be an obvious, easy way to hook into the action for the toolbar button. I can tell, using the debugger, that the action that's being called on the button is showMasterInPopover. And I am new to working with selectors programmatically, I admit.

    Can I somehow write an action and set it on the toolbar item without overriding the action that's already there? E.g. add an action that calls the one that's there now? Or would I have to write an action that shows/hides the popover myself (behavior that's being done behind the scenes presumably by the split view controller now???). Or am I missing an easy way to add this behavior to this button without changing the existing behavior that's being set up for me? Thank you!
