Search Results

Search found 5224 results on 209 pages for 'modify'.


  • Data Binding to an object in C#

    - by Allen
    Objective-C/Cocoa offers a form of binding where a control's properties (e.g. the text in a textbox) can be bound to a property of an object. I am trying to duplicate this functionality in C# with .NET 3.5. I have created the following very simple class in the file MyClass.cs:

        class MyClass
        {
            private string myName;

            public string MyName
            {
                get { return myName; }
                set { myName = value; }
            }

            public MyClass()
            {
                myName = "Allen";
            }
        }

    I also created a simple form with one textbox and one button, initialized one instance of MyClass inside the form code, and built the project. Using the Data Source wizard in VS2008, I chose to create a data source based on an object and selected the MyClass assembly. This created a data source entity. I changed the data binding of the textbox to this data source; however, the expected result (that the textbox's contents would be "Allen") was not achieved. Further, putting text into the textbox does not update the MyName property of the object. I know I'm missing something fundamental here. At some point I should have to tie my instance of the MyClass class that I initialized inside the form code to the textbox, but that hasn't occurred. Everything I've looked at online seems to gloss over using data binding with an object (or I'm missing the mark entirely), so any help is greatly appreciated.

    ---- Edit ----

    Using what I learned from the answers, I looked at the code generated by Visual Studio; it had the following:

        this.myClassBindingSource.DataSource = typeof(BindingTest.MyClass);

    If I comment that out and substitute:

        this.myClassBindingSource.DataSource = new MyClass();

    I get the expected behavior. Why is the default code generated by VS like it is? Assuming this is more correct than the method that works, how should I modify my code to work within the bounds of what VS generated?
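    For illustration only, here is a minimal, self-contained sketch of binding a TextBox directly to an object instance at runtime; the INotifyPropertyChanged implementation and the names DemoForm, instance, and box are assumptions added for the example, not part of the original question:

        using System;
        using System.ComponentModel;
        using System.Windows.Forms;

        class MyClass : INotifyPropertyChanged
        {
            private string myName = "Allen";
            public event PropertyChangedEventHandler PropertyChanged;

            public string MyName
            {
                get { return myName; }
                set
                {
                    myName = value;
                    // Notify bound controls that the value changed.
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("MyName"));
                }
            }
        }

        class DemoForm : Form
        {
            public DemoForm()
            {
                MyClass instance = new MyClass();   // the live object being edited
                TextBox box = new TextBox();
                Controls.Add(box);

                // Bind TextBox.Text to instance.MyName (two-way).
                box.DataBindings.Add("Text", instance, "MyName", true,
                    DataSourceUpdateMode.OnPropertyChanged);
            }

            [STAThread]
            static void Main()
            {
                Application.Run(new DemoForm());
            }
        }

    In the generated code, the typeof(...) assignment gives the BindingSource only type information for design time; handing it (or the control bindings) a live instance at runtime is what makes values appear.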


  • Converting a timestamp to "time ago" in PHP, e.g. 1 day ago, 2 days ago...

    - by cosmicbdog
    I am trying to take a timestamp of the format 2009-09-12 20:57:19 and turn it into something like "3 minutes ago" with PHP. I found a useful script to do this, but I think it's looking for a different format to be used as the time variable. The script I'm wanting to modify to work with this format is:

        function _ago($tm, $rcs = 0) {
            $cur_tm = time();
            $dif = $cur_tm - $tm;
            $pds = array('second','minute','hour','day','week','month','year','decade');
            $lngh = array(1,60,3600,86400,604800,2630880,31570560,315705600);
            for($v = sizeof($lngh)-1; ($v >= 0) && (($no = $dif/$lngh[$v]) <= 1); $v--);
            if($v < 0) $v = 0;
            $_tm = $cur_tm - ($dif % $lngh[$v]);
            $no = floor($no);
            if($no <> 1) $pds[$v] .= 's';
            $x = sprintf("%d %s ", $no, $pds[$v]);
            if(($rcs == 1) && ($v >= 1) && (($cur_tm - $_tm) > 0)) $x .= time_ago($_tm);
            return $x;
        }

    I think on those first few lines it's trying to do something that looks like this (different date format math):

        $dif = 1252809479 - 2009-09-12 20:57:19;

    How would I go about converting my timestamp into that (Unix?) format?


  • How will Arel affect Rails' includes() capabilities?

    - by Tim Snowhite
    I've looked over the Arel sources, and some of the ActiveRecord sources for Rails 3.0, but I can't seem to glean a good answer for myself as to whether Arel will change our ability to use includes(), when constructing queries, for the better.

    There are instances when one might want to modify the conditions on an ActiveRecord :include query in 2.3.5 and before, for the association records which would be returned. But as far as I know, this is not programmatically tenable for all :include queries. (I know some AR find-includes make t#{n}.c#{m} renames for all the attributes, and one could conceivably add conditions to these queries to limit the joined sets' results; but others do n_joins + 1 queries over the id sets iteratively, and I'm not sure how one might hack AR to edit these iterated queries.)

    Will Arel allow us to construct ActiveRecord queries which specify the resulting associated model objects when using includes()? Example: User has_many :posts (which has_many :comments):

        User.all(:include => :posts)
        # Say I wanted the post objects to have their comment counts loaded
        # without adding a comment_count column to `posts`.
        # At the post level, one could do so by:
        posts_with_counts = Post.all(
          :select   => 'posts.*, count(comments.id) as comment_count',
          :joins    => 'left outer join comments on comments.post_id = posts.id',
          :group_by => 'posts.id')  # I believe
        # But it seems impossible to do so while linking these post objects to each
        # user as well, without running User.all() and then zippering the objects into
        # some other collection (ugly)
        # OR running posts.group_by(&:user) (even uglier, with the n user queries)


  • How to deploy RSWebParts.cab manually?

    - by denni
    I'm using the SSRS 2005 Web parts to display my reports in a MOSS 2007 SP1 portal. I have successfully installed the Web parts on my development, testing, and UAT servers using the following command:

        stsadm -o addwppack -filename path/to/RSWebParts.cab

    But when I tried running the same command on the production server, it gives me the following error:

        This solution contains no resources scoped for a Web application and cannot be deployed to a particular Web application.

    I know I usually get this kind of error message when I try to deploy a custom solution that has no Web application resources (such as web.config entries) to a specific Web application. But this is not my custom solution; it is an out-of-the-box SSRS Web part and it does have resources scoped to a Web application. I even tried different combinations of the command by providing the -url, -globalinstall, and -force switches, but it still gives the same error. The configuration of the 4 servers is exactly the same, from both software and hardware perspectives. All other features are working properly on the production server. I even tried to extract the cab file manually into the bin folder of my Web application and then modify the web.config manually to include the SafeControl element (copied from the manifest.xml inside the cab file), but it gave me an error saying it couldn't find the resources file, even though I extracted everything, including the resource files, into the bin folder. Is there anyone who can help me resolve the problem? Thanks a lot.


  • Merging: hg/git vs. svn

    - by stmax
    I often read that hg (and git and...) are better at merging than svn, but I have never seen practical examples of where hg/git can merge something where svn fails (or where svn needs manual intervention). Could you post a few step-by-step lists of branch/modify/commit/... operations that show where svn would fail while hg/git happily moves on? Practical, not highly exceptional cases, please.

    Some background: we have a few dozen developers working on projects using svn, with each project (or group of similar projects) in its own repo. We know how to apply release and feature branches, so we don't run into problems very often (i.e. we've been there, but we've learned to overcome Joel's problems of "one programmer causing trauma to the whole team" or "needing six developers for two weeks to reintegrate a branch"). We have release branches that are very stable and only used to apply bugfixes. We have trunks that should be stable enough to be able to create a release within one week. And we have feature branches that single developers or groups of developers can work on. Yes, they are deleted after reintegration so they don't clutter up the repository. ;)

    So I'm still trying to find the advantages of hg/git over svn. I'd love to get some hands-on experience, but there aren't any bigger projects we could move to hg/git yet, so I'm stuck with playing with small artificial projects that only contain a few made-up files. And I'm looking for a few cases where you can feel the impressive power of hg/git, since so far I have often read about them but failed to find them myself.


  • How to avoid loading a LINQ to SQL object twice when editing it on a website

    - by emzero
    I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like, and I couldn't find a good alternative to it.

    In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) at first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here is a simplified example from ScottGu's NerdDinner site:

        //
        // GET: /Dinners/Edit/5
        [Authorize]
        public ActionResult Edit(int id)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            return View(new DinnerFormViewModel(dinner));
        }

        //
        // POST: /Dinners/Edit/5
        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Dinner dinner = dinnerRepository.GetDinner(id);
            UpdateModel(dinner);
            dinnerRepository.Save();
            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects, I don't like getting it twice when it should be retrieved only once, modified, and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice? (In the example above it won't hurt, but there could be cases where getting an object or set of objects is heavy.) If so, what alternative do you think is the best to avoid double-loading the object? Thank you so much!
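    For illustration, a sketch of one commonly suggested way to skip the second query: rebuild the entity from the posted values and attach it to the DataContext as modified. The NerdDinnerDataContext name, and the assumption that Dinner has a timestamp/rowversion column (or UpdateCheck.Never on its members) so LINQ to SQL can generate the UPDATE without original values, are mine, not from the original question:

        // Sketch only: update without re-querying the row first.
        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Edit(int id, FormCollection collection)
        {
            Dinner dinner = new Dinner { DinnerID = id };
            UpdateModel(dinner);                     // copy posted form values onto the object

            using (NerdDinnerDataContext db = new NerdDinnerDataContext())
            {
                db.Dinners.Attach(dinner, true);     // attach as modified; no SELECT issued
                db.SubmitChanges();                  // generates the UPDATE statement
            }

            return RedirectToAction("Details", new { id = dinner.DinnerID });
        }

    Whether skipping the read is worth it is debatable: attaching as modified gives up optimistic-concurrency checks against the original values unless a version column is present.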


  • Why an auto_ptr can "seal" a container

    - by icephere
    The auto_ptr article on Wikipedia said that "an auto_ptr containing an STL container may be used to prevent further modification of the container." It used the following example:

        auto_ptr<vector<ContainedType> > open_vec(new vector<ContainedType>);
        open_vec->push_back(5);
        open_vec->push_back(3);

        // Transfers control, but now the vector cannot be changed:
        auto_ptr<const vector<ContainedType> > closed_vec(open_vec);
        // closed_vec->push_back(8); // Can no longer modify

    If I uncomment the last line, g++ will report an error:

        t05.cpp:24: error: passing ‘const std::vector<int, std::allocator<int> >’ as ‘this’ argument of ‘void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = int, _Alloc = std::allocator<int>]’ discards qualifiers

    I am curious why, after transferring ownership of this vector, it can no longer be modified? Thanks a lot!


  • Why does output of fltk-config truncate arguments to gcc?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCons "make replacement" and the Fast Light Toolkit GUI. The SConstruct code to detect the presence of FLTK is:

        guienv = Environment(CPPFLAGS = '')
        guiconf = Configure(guienv)
        if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h', 'c'):
            print 'Did not find liblo for OSC, exiting!'
            Exit(1)
        if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H', 'c++'):
            print 'Did not find FLTK for the gui, exiting!'
            Exit(1)

    Unfortunately, on my (Gentoo Linux) system, and many other Linux distributions, this can be quite troublesome if the package manager allows the simultaneous installation of FLTK-1 and FLTK-2. I have attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs, which might be better than ldflags) by adding them like so:

        guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read())
        guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read())

    But this causes the test for liblo to fail! Looking in config.log shows how it failed:

        scons: Configure: Checking for C library lo...
        gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT"
        gcc: no input files
        scons: Configure: no

    How should this really be done? And to complete my answer, how do I remove the quotes from the result of os.popen('command').read()?

    EDIT: The real question here is why does appending the output of fltk-config cause gcc to not receive the filename argument it is supposed to compile?


  • SmartGWT DateItem useTextField=true - how to make the text entry field uneditable

    - by Paul
    Since I can't figure out how to solve my problem presented here, I'm thinking for the moment of a temporary solution. I have a SmartGWT DateItem widget:

        DateItem date = new DateItem("Adate");
        date.setWidth(120);
        date.setWrapTitle(false);
        date.setAttribute("useTextField", true);
        date.setAttribute("inputFormat", "yyyy/MM/dd");
        date.setAttribute("displayFormat", "toJapanShortDate");

    Because the attribute useTextField is set to true, we can see the text entry field. How can I make this text entry field uneditable? Actually I want to have only the possibility to choose the date from the calendar and not to change it manually.

    Resolved (the issue exposed above), thanks to @RAS:

        TextItem textItem = new TextItem();
        textItem.setAttribute("readOnly", true);
        date.setAttribute("textFieldProperties", textItem);

    (Related link)

    But I have now another issue: the date chooser won't show the date from the text field, but today's date. For example, enter 30/05/2009 in the text field, go to another field, then come back and click on the date chooser, and the selected day will be today's date instead of June 30th, 2009. What is the reason for this? Can this be solved? Also, let's say I let the user manually modify the date: can I put some validators on it? Thank you.


  • ASP.NET Event delegation between user controls

    - by Ishan
    Given the following control hierarchy on an ASP.NET page:

        Page
            HeaderControl   (user control)
                TitleControl   (server control)
            TabsControl   (user control)
            other controls

    I'm trying to raise an event (or some notification) in the TitleControl that bubbles to the Page level. Then I'd like to (optionally) register an event handler in the Page codebehind that will take the EventArgs and modify the TabsControl in the example above. The important thing to note is that this design will allow me to drop these controls into any Page and make the entire system work seamlessly if the event handler is wired up. The solution should not involve a call to FindControl(), since that becomes a strong association. If no handler is defined in the containing Page, the event is still raised by TitleControl but is not handled.

    My basic goal is to use event-based programming so that I can decouple the user controls from each other. The event from TitleControl is only raised in some instances, and this seemed (in my head) to be the preferred approach. However, I can't seem to find a way to cleanly achieve this. Here are my (poor) attempts:

    Using HttpContext.Current.Items: add the EventArgs to the Items collection in TitleControl and pick it up in the TabsControl. This works, but it's fundamentally hard to decipher since the connection between the two controls is not obvious.

    Using reflection: instead of raising events, look for a method on the containing Page directly within TitleControl, as in:

        Page.GetType().GetMethod("TabControlHandler").Invoke(Page, EventArgs);

    This will work, but the method name will have to be a constant that all Page instances will have to define verbatim.

    I'm sure that I'm over-thinking this and there must be a prettier solution using delegation, but I can't seem to think of it. Any thoughts?
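    As a sketch of the plain event-bubbling approach being asked about (the event name TitleChanged, the TitleEventArgs class, the Highlight method, and the designer-generated field names titleControl, headerControl, and tabsControl are all assumptions made for illustration):

        using System;

        // Hypothetical event args carried from the server control up to the page.
        public class TitleEventArgs : EventArgs
        {
            public string NewTitle { get; set; }
        }

        // Server control: raises the event; it knows nothing about TabsControl.
        public class TitleControl : System.Web.UI.Control
        {
            public event EventHandler<TitleEventArgs> TitleChanged;

            protected void OnTitleChanged(TitleEventArgs e)
            {
                if (TitleChanged != null)
                    TitleChanged(this, e);   // no-op if the page never subscribed
            }
        }

        // User control: simply re-exposes the inner control's event to its container.
        // (titleControl would be the designer-generated field for the inner control.)
        public partial class HeaderControl : System.Web.UI.UserControl
        {
            public event EventHandler<TitleEventArgs> TitleChanged
            {
                add { titleControl.TitleChanged += value; }
                remove { titleControl.TitleChanged -= value; }
            }
        }

        // Page codebehind: optional wiring; only the page knows about both controls.
        public partial class DemoPage : System.Web.UI.Page
        {
            protected void Page_Init(object sender, EventArgs e)
            {
                // Highlight() is a hypothetical method on the tabs user control.
                headerControl.TitleChanged += (s, args) => tabsControl.Highlight(args.NewTitle);
            }
        }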


  • ildasm and dynamic exe files

    - by TonyNeallon
    Hi there, I am trying to create an application that can modify properties in IL to create a slightly different executable. E.g. client A runs the app and a label on the WinForm reads "Client A"; client B runs the app and the label says "Client B". Easy, I know, using config files or resource files, but that's not an option for this project. The main program needs to be able to generate the .exe file dynamically based on some form fields entered by the user.

    My solution was to create a standalone executable that contained all the elements which I needed to make dynamic. I then used ildasm to generate the IL and thought that I could use this IL and substitute tags for the elements I wanted to make dynamic. I could then replace those tags at runtime after the user filled in the form, using regex etc. The problem is, if I re-save the IL file generated by ILDASM as an .exe and try to run it, it just launches a console and does nothing.

    Am I going about this the wrong way? I didn't want to delve into Reflection, as the dynamic .exe is a really simple one and I thought reverse-engineering IL with ildasm would be the quickest way. Your thoughts and pointers are much appreciated. Tony
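    One detail worth noting: the .il text that ildasm produces has to be reassembled with ilasm.exe; saving or renaming it as .exe is not enough. A rough sketch of that round trip follows (the template filename, the {{CLIENT_NAME}} placeholder, and the Framework path are assumptions made for illustration):

        // Sketch only: patch a disassembled .il template and reassemble it with ilasm.
        using System;
        using System.Diagnostics;
        using System.IO;

        class Customizer
        {
            static void Main()
            {
                string template = File.ReadAllText("Template.il");           // output of ildasm
                string patched = template.Replace("{{CLIENT_NAME}}", "Client B");
                File.WriteAllText("ClientB.il", patched);

                // Reassemble the IL into a runnable executable.
                ProcessStartInfo psi = new ProcessStartInfo(
                    @"C:\Windows\Microsoft.NET\Framework\v2.0.50727\ilasm.exe",
                    "ClientB.il /exe /output=ClientB.exe");
                psi.UseShellExecute = false;

                Process p = Process.Start(psi);
                p.WaitForExit();
                Console.WriteLine("ilasm exit code: " + p.ExitCode);
            }
        }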


  • How to build Android for Samsung Galaxy Note

    - by Tr?n Ð?i
    I'd like to modify and build my own Android for my Samsung Galaxy Note. I've downloaded Android 4.1.2 from http://source.android.com and the Samsung open source for my Samsung Galaxy Note. After extracting the Samsung open source, I get 2 folders, Kernel and Platform, and 2 README text files.

    README_Kernel.txt:

        1. How to Build
         - get Toolchain
           From android git server, codesourcery and etc..
           - arm-eabi-4.6
         - edit build_kernel.sh
           edit "CROSS_COMPILE" to right toolchain path (You downloaded).
           EX) CROSS_COMPILE= $(android platform directory you download)/android/prebuilts/gcc/linux-x86/arm/arm-eabi-4.6/bin/arm-eabi-
           Ex) CROSS_COMPILE=/usr/local/toolchain/arm-eabi-4.6/bin/arm-eabi-   // check the location of toolchain
         - execute Kernel script
           $ ./build_kernel.sh

        2. Output files
         - Kernel : arch/arm/boot/zImage
         - module : drivers/*/*.ko

        3. How to Clean
           $ make clean

    README_Platform.txt:

        [Step to build]
        1. Get android open source.
           : version info - Android 4.1
           ( Download site : http://source.android.com )
        2. Copy module that you want to build - to original android open source
           If same module exist in android open source, you should replace it. (no overwrite)
           # It is possible to build all modules at once.
        3. You should add module name to 'PRODUCT_PACKAGES' in 'build\target\product\core.mk' as following case.
           case 1) bluetooth : should add 'audio.a2dp.default' to PRODUCT_PACKAGES
           case 2) e2fsprog : should add 'e2fsck' to PRODUCT_PACKAGES
           case 3) libexifa : should add 'libexifa' to PRODUCT_PACKAGES
           case 4) libjpega : should add 'libjpega' to PRODUCT_PACKAGES
           case 5) KeyUtils : should add 'libkeyutils' to PRODUCT_PACKAGES
           case 6) bluetoothtest\bcm_dut : should add 'bcm_dut' to PRODUCT_PACKAGES
           ex.) [build\target\product\core.mk] - add all module name for case 1 ~ 6 at once
             PRODUCT_PACKAGES += \
               e2fsck \
               libexifa \
               libjpega \
               libkeyutils \
               bcm_dut \
               audio.a2dp.default
        4. In case of 'bluetooth', you should add following text in 'build\target\board\generic\BoardConfig.mk'
             BOARD_HAVE_BLUETOOTH := true
             BOARD_HAVE_BLUETOOTH_BCM := true
        5. excute build command
           ./build.sh user

    What do I need to do after following the 2 files above?


  • Send parameters to a web service.

    - by Alejandra Meraz
    Before I start: I'm programming for iPhone, using Objective-C. I have already implemented a call to a web service function using NSURLRequest and NSURLConnection. The function then returns XML with the info I need. The code is as follows:

        NSURL *url = [NSURL URLWithString:@"http://myWebService/function"];
        NSMutableURLRequest *theRequest = [[NSMutableURLRequest alloc] initWithURL:url];
        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

    I also implemented the methods didReceiveResponse, didReceiveAuthenticationChallenge, didReceiveData, didFailWithError, and connectionDidFinishLoading, and it works perfectly. Now I need to send 2 parameters to the function: "location" and "module". I tried using the following modification:

        NSMutableURLRequest *theRequest = [[NSMutableURLRequest alloc] initWithURL:url];
        [theRequest setValue:@"USA" forHTTPHeaderField:@"location"];
        [theRequest setValue:@"DEVELOPMENT" forHTTPHeaderField:@"module"];
        NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

    But it doesn't seem to work. Am I doing something wrong? Is there a way to know if I'm using the wrong names for the parameters (as maybe it is "Location" or "LOCATION", or it doesn't matter)? Or a way to know which parameters the function is waiting for?

    Extra info: I don't have access to the source of the web service, so I can't modify it. But I can access the WSDL. The person who made the function says it's all there... but I can't make any sense of it. :< Any help would be appreciated. :)


  • LINQ query help (C#/.NET)

    - by Paul Matthews
    I'm very new to LINQ and struggling to find the answers. I have a simple SQL query:

        SELECT ID, COUNT(ID) as Selections, OptionName, SUM(Units) as Units
        FROM tbl_Results
        GROUP BY ID, OptionName

    The results I got were:

        '1' '4' 'Approved'    '40'
        '2' '1' 'Rejected'    '19'
        '3' '2' 'Not Decided' '12'

    Because I have to encrypt all my data in the database, I'm unable to do sums there. Therefore I now bring back the data and decrypt it in the application layer. The results would be:

        '1' 'Approved'    '10'
        '3' 'Not Decided' '6'
        '2' 'Rejected'    '19'
        '1' 'Approved'    '15'
        '1' 'Approved'    '5'
        '3' 'Not Decided' '6'
        '1' 'Approved'    '10'

    Using a simple class, I have called back the above results and put them in a list:

        public class results
        {
            public int ID { get; set; }
            public string OptionName { get; set; }
            public int Unit { get; set; }
        }

    I almost have the LINQ query to bring back the results like the SQL query at the top:

        var q = from r in Results
                group r.Unit by r.ID into g
                select new { ID = g.Key, Selections = g.Count(), Units = g.Sum() };

    How do I ensure my LINQ query also gives me the OptionName? Also, if I created a class called Statistics to hold my results, how would I modify the LINQ query to give me a list result set?

        public class results
        {
            public int ID { get; set; }
            public int NumberOfSelections { get; set; }
            public string OptionName { get; set; }
            public int UnitTotal { get; set; }
        }
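    For illustration, a sketch of how the grouping could carry the option name as well, by grouping on a composite key; this assumes the second holder class above is the Statistics class mentioned in the prose:

        // Sketch only: group on ID + OptionName and project into the holder class.
        var stats = (from r in Results
                     group r by new { r.ID, r.OptionName } into g
                     select new Statistics
                     {
                         ID = g.Key.ID,
                         NumberOfSelections = g.Count(),
                         OptionName = g.Key.OptionName,
                         UnitTotal = g.Sum(x => x.Unit)
                     }).ToList();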


  • DataTable vs. Collection in .Net

    - by B Pete
    I am writing a program that needs to read a set of records that describe the register map of a device I need to communicate with. Each record will have a handful of fields that describe the properties of each register. I don't really need to edit or modify the data in my VB or C# program, though I would like to be able to display the data on a grid. I would like to store the data in a CSV file, or perhaps an XML file. I need to enable users to edit the data off-line, preferably in Excel.

    I am considering using a DataTable or a Collection of "Register" objects (which I would define). I prototyped a DataTable, and found I can read/write XML easily using the built-in methods and I can easily bind to a DataGridView. I was not able to find a way to retrieve info on a single register without using a query that returns a collection of rows, even though I defined a unique primary key column. The syntax to get a value from a column is also complex, though I could be missing something on both counts.

    I'm tempted to use a collection of "Register" objects that I can access via a unique key. It would be a little more coding up front, but seems like a cleaner solution overall. I should still be able to use LINQ to DataSet to query subsets of registers when I need them, but I would also be able to grab a single field using a key value, something like this: Registers(keyValue).fieldName.

    Which would be a cleaner approach to the problem? Is there a way to read/write XML into a Collection without needing custom code? Could this be accomplished using String for a key?
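    For illustration, a sketch of the collection-based approach with XML persistence and keyed lookup; the Register fields shown are placeholders, not the actual register map format:

        // Sketch only: a keyed in-memory collection of Register records,
        // persisted as XML via XmlSerializer (no custom parsing code).
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Xml.Serialization;

        public class Register
        {
            public string Name { get; set; }       // used as the lookup key here
            public ushort Address { get; set; }
            public string Description { get; set; }
        }

        class RegisterMap
        {
            static void Main()
            {
                List<Register> registers = new List<Register>
                {
                    new Register { Name = "STATUS", Address = 0x00, Description = "Status flags" },
                    new Register { Name = "CTRL",   Address = 0x01, Description = "Control bits" }
                };

                // Write (and later read) the whole list as XML.
                XmlSerializer serializer = new XmlSerializer(typeof(List<Register>));
                using (StreamWriter w = new StreamWriter("registers.xml"))
                    serializer.Serialize(w, registers);

                // Index by name for single-record access.
                Dictionary<string, Register> byName = registers.ToDictionary(r => r.Name);
                Console.WriteLine(byName["CTRL"].Address);   // grab one field directly
            }
        }

    XmlSerializer handles a List<Register> without custom parsing code; a Dictionary built from that list then gives the Registers(key).Field style of access, since a Dictionary itself is not directly XML-serializable.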


  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation reprocessing that we would like to do would transform table content based on a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  | 133    |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  | 133    |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and modify the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column.

    I know this can be done in code... but can it be done entirely in a MySQL INSERT statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.


  • Query gives an unsorted result set when run from stored procedure using CTE

    - by irtizaur
    I am trying to create a paging query using a CTE. It works fine when I execute it from the Microsoft SQL Server Management Studio query editor, and the result set is perfectly sorted as I want. But when I modify it for a stored procedure, it gives me an unsorted result and I don't have any clue why. Here is my query:

        with items as
        (
            select ROW_NUMBER() over (order by create_time desc) number
                 , i.item_name item_name
                 , i.create_time create_time
                 , c.category_name category_name
                 , i.category_id category_id
              from cb_item i, cb_category c
             where i.category_id = c.category_id
               and c.category_id = '4E5248FE-05DD-4D01-ABBB-80C6E3BA5CDA'
        )
        select item_name
             , create_time
             , category_name
             , category_id
          from items
         where number between 1 and 25

    And this is the stored procedure version:

        create procedure ItemPage
            @category_id uniqueidentifier
          , @from int
          , @to int
          , @sortby nvarchar(50)
        as
        begin
            with items as
            (
                select ROW_NUMBER() over (order by @sortby) number
                     , i.item_name item_name
                     , i.create_time create_time
                     , c.category_name category_name
                     , i.category_id category_id
                  from cb_item i, cb_category c
                 where i.category_id = c.category_id
                   and c.category_id = @category_id
            )
            select item_name
                 , create_time
                 , category_name
                 , category_id
              from items
             where number between @from and @to
        end

        exec ItemPage '4E5248FE-05DD-4D01-ABBB-80C6E3BA5CDA', 1, 25, 'create_time desc'

    The first one gives me a sorted result but the procedure gives me an unsorted result. I don't know why.


  • Setting serial RS232 port settings; any C# alternatives to the SerialPort class?

    - by adrin
    In my .NET application I need to achieve a serial port setup equivalent to this C++ code:

        ::SetCommMask(m_hCOMM, EV_RXCHAR);
        ::SetupComm(m_hCOMM, 9*2*128*10, 400);
        ::PurgeComm(m_hCOMM, PURGE_TXABORT|PURGE_RXABORT|PURGE_TXCLEAR|PURGE_RXCLEAR);

        COMMTIMEOUTS timeOut;
        timeOut.ReadIntervalTimeout = 3;
        timeOut.ReadTotalTimeoutConstant = 3;
        timeOut.ReadTotalTimeoutMultiplier = 1;
        timeOut.WriteTotalTimeoutConstant = 0;
        timeOut.WriteTotalTimeoutMultiplier = 0;
        int nRet = ::SetCommTimeouts(m_hCOMM, &timeOut);

        ::EscapeCommFunction(m_hCOMM, SETDTR);
        ::EscapeCommFunction(m_hCOMM, SETRTS);

        DCB dcb;
        memset(&dcb, 0, sizeof(DCB));
        dcb.BaudRate = m_nSpeed;
        dcb.ByteSize = 8;
        dcb.fParity = FALSE;
        dcb.Parity = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        dcb.fBinary = TRUE;
        dcb.fDsrSensitivity = FALSE;
        dcb.fOutxDsrFlow = FALSE;
        dcb.fOutxCtsFlow = FALSE;
        dcb.fDtrControl = DTR_CONTROL_HANDSHAKE;
        dcb.fRtsControl = RTS_CONTROL_TOGGLE;
        nRet = ::SetCommState(m_hCOMM, &dcb);

    Is it possible at all? How do I approach this problem? Are there any (preferably free) libraries that allow such low-level serial port control, or should I create my own wrapper on top of the Win32 API? Has anyone done anything similar, or has an idea how to 'glue' the Win32 serial port API to .NET so that I can use the neat .NET DataReceived() events? Or maybe I can create a .NET SerialPort instance and then modify it using the managed API?
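    For comparison, a sketch of how far the managed System.IO.Ports.SerialPort gets toward the same setup. The mapping comments are approximate, and details such as RTS_CONTROL_TOGGLE or exact driver queue sizes have no direct managed equivalent and would still need P/Invoke:

        // Sketch only: approximate equivalent using System.IO.Ports.SerialPort.
        using System;
        using System.IO.Ports;

        class PortSetup
        {
            static SerialPort Open(string portName, int baudRate)
            {
                SerialPort port = new SerialPort(portName, baudRate, Parity.None, 8, StopBits.One);

                port.Handshake = Handshake.None;          // fOutxCtsFlow / fOutxDsrFlow = FALSE
                port.DtrEnable = true;                    // roughly EscapeCommFunction(SETDTR)
                port.RtsEnable = true;                    // roughly EscapeCommFunction(SETRTS)

                port.ReadTimeout = 3;                     // close to ReadTotalTimeoutConstant
                port.WriteTimeout = SerialPort.InfiniteTimeout;

                port.ReadBufferSize = 9 * 2 * 128 * 10;   // requested size, like SetupComm's in-queue
                                                          // (out-queue size left at its default here)

                port.DataReceived += (s, e) =>            // event-driven RX, like EV_RXCHAR
                {
                    SerialPort p = (SerialPort)s;
                    Console.WriteLine(p.ReadExisting());
                };

                port.Open();
                port.DiscardInBuffer();                   // PurgeComm RX
                port.DiscardOutBuffer();                  // PurgeComm TX
                return port;
            }
        }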


  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement dependency injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so the SqlConnection object is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependencies, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection.

    The business layer calls the DAL, so the business layer must pass in a SqlConnection. The presentation layer calls the business layer, hence it too must pass a SqlConnection to the business layer. This is great for class isolation and testability, but didn't we just couple the UI and business layers to a specific implementation of the data layer, one which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support data sources other than SQL Server (such as XML files, comma-delimited files, etc.)?

    Furthermore, what if I add another object upon which my data layer depends (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
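    For illustration, a sketch of the usual way out: the upper layers depend on an abstraction they define, and only the composition root knows about SqlConnection. All interface and class names here are invented for the example:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public interface ICustomerRepository              // what the business layer needs
        {
            string GetCustomerName(int id);
        }

        public class SqlCustomerRepository : ICustomerRepository
        {
            private readonly string connectionString;

            public SqlCustomerRepository(string connectionString)
            {
                this.connectionString = connectionString;
            }

            public string GetCustomerName(int id)
            {
                using (IDbConnection conn = new SqlConnection(connectionString))
                using (IDbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT Name FROM Customers WHERE Id = @id";
                    IDbDataParameter p = cmd.CreateParameter();
                    p.ParameterName = "@id";
                    p.Value = id;
                    cmd.Parameters.Add(p);
                    conn.Open();
                    return (string)cmd.ExecuteScalar();
                }
            }
        }

        public class CustomerService                      // business layer: no SqlConnection here
        {
            private readonly ICustomerRepository repository;

            public CustomerService(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public string Describe(int id)
            {
                return "Customer: " + repository.GetCustomerName(id);
            }
        }

        class CompositionRoot                             // the only place that news up SQL specifics
        {
            static void Main()
            {
                ICustomerRepository repo =
                    new SqlCustomerRepository("Server=.;Database=Demo;Integrated Security=true");
                CustomerService service = new CustomerService(repo);
                Console.WriteLine(service.Describe(42));
            }
        }

    Swapping SQL Server for XML or CSV storage then means writing another ICustomerRepository implementation and changing only the composition root; the business and presentation layers never see the change.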


  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into these errors while trying to modify the Pinax database model. I am using Eclipse PyDev, and I get this error in PyDev:

        Exception Type: TemplateSyntaxError at /
        Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")

    Please, how do I correct this?

    UPDATE: I asked this question and left it unresolved some months back, and wouldn't you know it, I ran into the bug again this week. I typed the error message into Google and hit the page with my own unanswered question, so I think I have to answer it and hope it helps someone with the same problem in the future. The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find it. To resolve it, change the database path to an absolute path, like this:

        DATABASE_ENGINE = 'sqlite3'    # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
        DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')    # Or path to database file if using sqlite3.
        DATABASE_USER = ''             # Not used with sqlite3.
        DATABASE_PASSWORD = ''         # Not used with sqlite3.
        DATABASE_HOST = ''             # Set to empty string for localhost. Not used with sqlite3.
        DATABASE_PORT = ''             # Set to empty string for default. Not used with sqlite3.

    I hope that helps.


  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: a front end that holds all the objects except tables, and a back end that holds the tables. The MSDN page links to the article "Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability", which describes the process in detail. It is recommended that in a multi-user environment the back end is stored on the server/shared folder while the front end is distributed to each user. That implies that each time any changes are made to the front end, they need to be deployed to every user machine.

    My question is: assuming that the users themselves do not have rights to modify the front-end part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the back-end copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.?

    EDIT: Just to clarify, the scenario specified in the question assumes one front end stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done, e.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case?

    This is a follow-up to my previous question, "From SQL Server to MS Access 2007".


  • Is there a way to get an ASMX Web Service created in VS 2005 to receive and return JSON?

    - by Ben McCormack
    I'm using .NET 2.0 and Visual Studio 2005 to try to create a web service that can be consumed both as SOAP/XML and as JSON. I read Dave Ward's answer to the question "How to return JSON from a 2.0 asmx web service" (in addition to reading other articles at Encosia.com), but I can't figure out how I need to set up the code of my asmx file in order to work with JSON using jQuery.

    Two questions:

    1. How do I enable JSON in my .NET 2.0 ASMX file?
    2. What's a simple jQuery call that could consume the service using JSON?

    Also, I notice that since I'm using .NET 2.0, I'm not able to implement using System.Web.Script.Services.ScriptService. Here's my C# code for the demo ASMX service:

        using System;
        using System.Web;
        using System.Collections;
        using System.Web.Services;
        using System.Web.Services.Protocols;

        /// <summary>
        /// Summary description for StockQuote
        /// </summary>
        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        public class StockQuote : System.Web.Services.WebService
        {
            public StockQuote()
            {
                //Uncomment the following line if using designed components
                //InitializeComponent();
            }

            [WebMethod]
            public decimal GetStockQuote(string ticker)
            {
                //perform database lookup here
                return 8;
            }

            [WebMethod]
            public string HelloWorld()
            {
                return "Hello World";
            }
        }

    Here's a snippet of jQuery I found on the internet and tried to modify:

        $(document).ready(function() {
            $("#btnSubmit").click(function(event) {
                $.ajax({
                    type: "POST",
                    contentType: "application/json; charset=utf-8",
                    url: "http://bmccorm-xp/WebServices/HelloWorld.asmx",
                    data: "",
                    dataType: "json"
                });
                event.preventDefault();
            });
        });
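    For what it's worth, a sketch of what the service side looks like once ASP.NET AJAX Extensions 1.0 is installed (it targets .NET 2.0 and supplies System.Web.Extensions.dll with the ScriptService attribute, plus the ScriptHandlerFactory registration for web.config). The attribute and namespace are real; treating this as the fix for this particular project is an assumption about the setup:

        // Sketch only: the JSON-enabled version of the demo service,
        // assuming ASP.NET AJAX Extensions 1.0 is installed and registered in web.config.
        using System.Web.Script.Services;
        using System.Web.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [ScriptService]   // allows JSON responses for requests sent with a JSON content type
        public class StockQuote : WebService
        {
            [WebMethod]
            public decimal GetStockQuote(string ticker)
            {
                // database lookup omitted
                return 8;
            }
        }

    The client call would then POST to a method URL such as StockQuote.asmx/GetStockQuote, with data like "{ticker:'MSFT'}" so the JSON property name matches the method's parameter name.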


  • PHP incomplete code - scan dir, include only if name starts or ends with x

    - by Adrian M.
    I posted a question before, but I am not yet able to adjust the code without getting errors; I'm rather new to PHP. :( (The dirs are named in series like this: "id_1_1", "id_1_2", "id_1_3" and "id_2_1", "id_2_2", "id_2_3", etc.)

    I have this code, which will scan a directory for all the files and then include a same known-named file for each of the existing folders. The problem is I want to modify the code a bit to only include certain directories whose names end with "_1" or start with "id_1_". I want to create a page that will load only the dirs that end with "_1" and another file that will load only the dirs that start with "id_1_".

        <?php
        include_once "$root/content/common/header.php";
        include_once "$root/content/common/header_bc.php";
        include_once "$root/content/" . $page_file . "/content.php";

        $page_path = ("$root/content/" . $page_file);

        $includes = array();
        $iterator = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator($page_path),
            RecursiveIteratorIterator::SELF_FIRST);
        foreach($iterator as $file) {
            if($file->isDir()) {
                $includes[] = strtoupper($file . '/template.php');
            }
        }
        $includes = array_reverse($includes);
        foreach($includes as $file){
            include $file;
        }

        include_once "$root/content/common/footer.php";
        ?>

    Many thanks!


  • Audio playback: creating a nested loop for fade in/out

    - by Dave Slevin
    Hi folks, first-time poster here. A quick question about setting up a loop. I want to set up a for loop for the first 1/3 of the main loop that will increase a value from .00001 or similar to 1, so I can use it to multiply a sample variable and create a fade-in in this simple audio file playback routine. So far it's turning out to be a bit of a head-scratcher; any help gratefully received.

        for(i = 0; i < end && !feof(fpin); i += blockframes) {
            samples = fread(audioblock, sizeof(short), blocksamples, fpin);
            frames = samples;
            for(j = 0; j < frames; j++) {
                for(f = 0; f < frames/3; f++) {
                    fade = fade--;
                }
                output[j] = audioblock[j]/fade;
            }
            fwrite(output, sizeof(short), frames, fpoutput);
        }

    Apologies. So far I've read and re-written the file successfully. My problem is that I'm trying to figure out a way to loop the variable 'fade' so it either increases or decreases to 1, so I can modify the output variable. I wanted to do this in, say, 3 stages: 1. From 0 to frames/3, increase a multiplication factor from .0001 to 1. 2. From frames/3 to 2*frames/3, do nothing (multiply by 1). 3. Decrease the factor again below 1, so the output variable decreases back to the original point. How can I create a loop that will increase and decrease these values over the outside loop?


  • How can I stop SQL Server Management Studio from replacing 'SELECT *' with the column list?

    - by Ben McIntyre
    SQL Server Management Studio is driving me crazy. If I create a view and SELECT * from a table, it's all OK and I can save the view. Looking at the SQL for the view (e.g. by scripting a CREATE) reveals that the SELECT * really is saved to the view's SQL. But as soon as I reopen the view using the GUI (right click, Modify), SELECT * is replaced with a column list of all the columns in the table. How can I stop Management Studio from doing this? I want my SELECT * to remain just that. Perhaps it's just the difficulty of googling 'SELECT *' that prevented me from finding anything remotely relevant to this (I did put it in double quotes).

    Please, I am highly experienced in Transact-SQL, so please DON'T give me a lecture on why I shouldn't be using SELECT *. I know all the pros and cons and I do use it at times. It's a language feature, and like all language features it can be used for good or evil (I emphatically do NOT agree that it is never appropriate to use it).

    Edit: I'm giving Marc the answer, since it seems it is not possible to turn this behaviour off. The problem is considered closed. I note that Enterprise Manager did no similar thing. The workaround is to either edit the SQL as text, or go to a product other than Management Studio. Or constantly edit out the column list and replace the * every time you edit a view. Sigh.

