Search Results

Search found 33139 results on 1326 pages for 'embedded database'.


  • Google App Engine: 404 when accessing /_ah/api

    - by jfu
    I'm trying to build a very simple GAE application using Eclipse and the Google Plugin for Eclipse. I've generated some Endpoints from an @Entity class, then I've generated the Cloud Endpoints client library. After that I started the App Engine project (within Eclipse, on the embedded Jetty server). When I try to access /_ah/api I get the following error:

        HTTP ERROR 500
        Problem accessing /_ah/api/. Reason: Failed to retrieve API configs with status: 404
        Caused by: java.io.IOException: Failed to retrieve API configs with status: 404
            at com.google.api.server.spi.tools.devserver.ApiServlet.getApiConfigSources(ApiServlet.java:102)
            at com.google.api.server.spi.tools.devserver.ApiServlet.initConfigsIfNecessary(ApiServlet.java:67)
            at com.google.api.server.spi.tools.devserver.RestApiServlet.service(RestApiServlet.java:117)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.google.appengine.api.socket.dev.DevSocketFilter.doFilter(DevSocketFilter.java:74)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.ResponseRewriterFilter.doFilter(ResponseRewriterFilter.java:123)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.HeaderVerificationFilter.doFilter(HeaderVerificationFilter.java:34)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.api.blobstore.dev.ServeBlobFilter.doFilter(ServeBlobFilter.java:63)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.StaticFileFilter.doFilter(StaticFileFilter.java:125)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectRequest(DevAppServerModulesFilter.java:368)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doDirectModuleRequest(DevAppServerModulesFilter.java:351)
            at com.google.appengine.tools.development.DevAppServerModulesFilter.doFilter(DevAppServerModulesFilter.java:116)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)

    What am I doing wrong?
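
    A common cause of this 404 on the dev server is that the endpoint class is not registered with the Endpoints backend servlet, so there are no API configs for /_ah/api to retrieve. Below is a minimal web.xml sketch for the legacy Cloud Endpoints SystemServiceServlet; com.example.MyEndpoint is a placeholder for the generated @Api class, so substitute your own. If the Eclipse plugin already generated this block, check that every endpoint class appears in the services list.

        <!-- minimal sketch; com.example.MyEndpoint is a placeholder for your @Api class -->
        <servlet>
          <servlet-name>SystemServiceServlet</servlet-name>
          <servlet-class>com.google.api.server.spi.SystemServiceServlet</servlet-class>
          <init-param>
            <param-name>services</param-name>
            <param-value>com.example.MyEndpoint</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>SystemServiceServlet</servlet-name>
          <url-pattern>/_ah/spi/*</url-pattern>
        </servlet-mapping>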


  • How to handle empty selection in a JFace-bound combobox?

    - by guido
    I am developing a search dialog in my Eclipse RCP application. In the search dialog I have a combobox, set up as follows:

        comboImp = new CCombo(grpColSpet, SWT.BORDER | SWT.READ_ONLY);
        comboImp.setBounds(556, 46, 184, 27);
        comboImpViewer = new ComboViewer(comboImp);
        comboImpViewer.setContentProvider(new ArrayContentProvider());
        comboImpViewer.setInput(ImpContentProvider.getInstance().getImps());
        comboImpViewer.setLabelProvider(new LabelProvider() {
            @Override
            public String getText(Object element) {
                return ((Imp) element).getImpName();
            }
        });

    Imp is a database entity, ManyToOne to the main entity being searched, and ImpContentProvider is the model class which talks to an embedded SQLite database via JPA/Hibernate. This combobox is supposed to contain all instances of Imp, but also to allow an empty selection; its value is bound to a service bean as follows:

        IObservableValue comboImpSelectionObserveWidget = ViewersObservables.observeSingleSelection(comboImpViewer);
        IObservableValue filterByImpObserveValue = BeansObservables.observeValue(searchPrep, "imp");
        bindingContext.bindValue(comboImpSelectionObserveWidget, filterByImpObserveValue, null, null);

    As soon as the user clicks on the combo, a selection (the first element) is made: I can see the call to a selection listener I added on the viewer. My question is: after a selection has been made, how do I let the user change their mind and return to an empty selection in the combobox? Should I add a "fake" empty instance of Imp to the list returned by ImpContentProvider, or should I implement an alternative to ArrayContentProvider? And one additional related question: why does calling deselectAll() and clearSelection() on the combo NOT set a null value on the bound bean?
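
    On the last point, the binding observes the viewer's selection, and programmatic calls on the raw CCombo likely never reach the viewer as a selection-changed event, so the bound bean never sees the change. One approach (besides adding a sentinel "empty" Imp to the input) is to clear the selection through the viewer itself; a small sketch, where clearButton is a hypothetical widget added next to the combo:

        // a sketch, not the only approach: clear the selection through the viewer so the
        // data binding (which observes the viewer, not the raw CCombo) sees the change
        clearButton.addSelectionListener(new SelectionAdapter() {
            @Override
            public void widgetSelected(SelectionEvent e) {
                comboImpViewer.setSelection(StructuredSelection.EMPTY);
            }
        });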


  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have:

    a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc.
    b) I have an Excel document that connects to the Access database and pulls data in to visualize it.

    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the PivotTable, based on the related hours. This operation is slow (~10 seconds) and seems redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance!

    Update: I have confirmed (due to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two out of the load time. So the question stands: how can I rapidly and reliably get the data without so much time involved (it loads around 3000 records into the PivotTables)?


  • Running Cargo From Maven antrun Plugin

    - by Frank
    I have a Maven (multi-module) project creating some WAR and EAR files for JBoss AS 7.1.x. For one purpose, I need to deploy one generated EAR file from one module to a fresh JBoss instance, run it, make some REST web service calls against it and stop JBoss. Then I need to package the results of these calls, which were written to the database. Currently I am trying to use Cargo and the Maven antrun plugin to perform this task. Unfortunately, I cannot get the three (Maven, antrun and Cargo) to play together. I don't have the uberjar that is used in the Ant examples for Cargo. How can I configure the antrun task so that the Cargo Ant tasks can create, start and deploy to my JBoss instance - ideally the one unpacked and configured by the cargo-maven2-plugin in another phase? Or is there a better way to achieve my goal of creating the database? I cannot really use the integration-test phase, as it is executed after the package phase, so I plan to do it all in the compile phase using antrun.


  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving webpages. I am using an include file to set up the connection to the databases, such as:

        //for master server (i.e. - UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fail back to master if slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time). Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
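
    One way to cap the connect time is to switch this include file from the legacy mysql extension to mysqli, whose per-connection options include a connect timeout that applies before real_connect() gives up. A rough sketch of the same slave-first, master-fallback logic, reusing the host/credential variables above:

        // a sketch: mysqli with a short connect timeout instead of mysql_connect()
        $Link_Local = mysqli_init();
        $Link_Local->options( MYSQLI_OPT_CONNECT_TIMEOUT, 2 );            // give up after ~2 seconds
        if( !@$Link_Local->real_connect( $Host_Local, $User_Local, $Password_Local ) ) {
            // slave is down - fall back to the master connection
            $Link_Local = mysqli_init();
            $Link_Local->options( MYSQLI_OPT_CONNECT_TIMEOUT, 2 );
            $Link_Local->real_connect( $Host, $User, $Password );
        }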


  • Does the new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - define a master test suite that contains all tests (e.g. using ClasspathSuite)
    - design a sufficient set of JUnit categories (sufficient means that every desirable collection of tests is identifiable using one or more categories)
    - define targeted test suites based on the master test suite and the set of categories

    For example:

    - identify categories for speed (slow, fast), dependencies (mock, database, integration), function (), domain (
    - demand that each test is properly qualified (tagged) with a relevant set of categories
    - create the master test suite using ClasspathSuite (all tests found in the classpath)
    - create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration suite for domain X, etc.

    My question is mostly soliciting opinions on this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained by the relevant suites with no suite maintenance. One concern is proper categorization of each test.
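
    For concreteness, here is a minimal sketch of the kind of targeted suite this describes, using the JUnit 4.8 Categories runner. The marker interfaces and the AllTests master suite are assumed names for illustration, not something from the question:

        import org.junit.Test;
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Category;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        // marker interfaces used as categories (names assumed for illustration)
        public interface FastTests {}
        public interface DatabaseTests {}

        // a test tagged with its categories
        public class AccountDaoTest {
            @Category({FastTests.class, DatabaseTests.class})
            @Test
            public void savesAccount() { /* ... */ }
        }

        // a targeted suite carved out of the master suite by category
        @RunWith(Categories.class)
        @Categories.IncludeCategory(FastTests.class)
        @Suite.SuiteClasses(AllTests.class)   // AllTests = the ClasspathSuite-based master suite
        public class FastTestSuite {}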


  • App crashes frequently at the time a UIActionSheet would be presented

    - by Jim Hankins
    I am getting the following error intermittently when a call is made to display an action sheet:

        Assertion failure in -[UIActionSheet showInView:]
        Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: view != nil'

    In this case I've not changed screens. The UIActionSheet is presented when a local notification fires and an observer calls a local method on this view, as shown below. I have the property marked as strong, and when the action sheet is dismissed I also set it to nil. I am using a storyboard for the UI. It's fairly repeatable - it crashes in perhaps fewer than 5 tries (thankfully I have that going for me). Any suggestions what to try next? I'm really pulling my hair out on this one. Most of the issues I've seen on this topic point to the crash occurring once the selection is made; in my case it's at presentation, and intermittently. Also, for what it's worth, this particular view is several levels deep in an embedded navigation controller (Home table view -> detail select -> the view controller in question). This same issue occurs so far in testing on iOS 5.1 and iOS 6. I'm presuming it has something to do with how the showInView: target is being resolved.

        self.actionSheet = [[UIActionSheet alloc] initWithTitle:@"Select Choice"
                                                       delegate:self
                                              cancelButtonTitle:@"Not Yet"
                                         destructiveButtonTitle:@"Do this Now"
                                              otherButtonTitles:nil];
        [self.actionSheet showInView:self.parentViewController.tabBarController.view];


  • Not getting key value from Identity column back after inserting new row with SubSonic ActiveRecord

    - by mikedevenney
    I'm sure I'm missing the obvious answer here, but I could use a hand. I'm new to SubSonic and using version 3. I've got myself to the point of being able to query and insert, but I'm stuck on how to get the value of the identity column back after my insert. I saw another post that mentioned Linq Templates; I'm not using those (at least I don't think I am...?). TIA

    UPDATE: So I've been debugging through my code, watching how the SubSonic code works, and I found where the identity column is being ignored. I use int as the data type for my ID columns in the database and set them as identity. Since int is a non-nullable data type in C#, the logical test in the Add method (public void Add(IDataProvider provider)) that checks whether there is a value in the key column by doing a (key == null) could be the issue. The code that gets the new value for the identity field is in the 'true' path; since an int can't be null, and I use ints as my identity column data types, this test will never pass. The ID field for my object has a 0 in it that I didn't put there - I assume it's set during the initialization of the object. Am I off base here? Is the answer to change my data types in the database?

    Another question (more a curiosity): I noticed that some of the properties in the generated classes are declared with a ? after the data type. I'm not familiar with this declaration construct... what gives? There are some declared as int (non-key fields) and others declared as int? (key fields). Does this have something to do with how they're treated at initialization? Any help is appreciated!
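
    On the ? syntax: it is C#'s nullable value type shorthand (int? means Nullable<int>), which is exactly what lets an ORM distinguish "no key assigned yet" from "key is 0". A minimal sketch of the difference - the variable names are illustrative, not SubSonic's generated code:

        using System;

        class NullableKeyDemo
        {
            static void Main()
            {
                int plainId = default(int);   // value types default to 0, so "unset" and "0" look identical
                int? nullableId = null;       // Nullable<int> can be genuinely null before an insert

                // an ORM's Add() can only detect "no key yet" via a null test when the key is nullable;
                // with a plain int the (key == null) check described above can never be true
                Console.WriteLine(nullableId == null);    // True
                Console.WriteLine(plainId == 0);          // True - indistinguishable from a real id of 0

                nullableId = 42;
                Console.WriteLine(nullableId.HasValue);   // True
                Console.WriteLine(nullableId.Value);      // 42
            }
        }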


  • Random problem connecting to MySQL

    - by CharlesLeaf
    Environment: RHEL 5 servers, MySQL 5.1.43, PHP 5.1.6 (using MySQLi). Currently only available within our internal VPN network.

    Servers:
    - ServerA: web server
    - ServerB/C/D: database servers (1 master, 2 slaves)

    The error (on ServerA):

        [Tue May 25 11:12:17 2010] [error] [client CLIENTIP] PHP Warning: mysqli::real_connect() [function.mysqli-real-connect]: (HY000/2003): Can't connect to MySQL server on 'ServerB' (4) in /home/**/Database.php on line 67, referer: [website]

    Problem description: It appears that at completely random times our website is unable to connect to one of the MySQL servers - usually the master. Except for the aforementioned error message, there is nothing to be found in any of the logs as far as I can see, and most of the time the connection is successful and everything works as it should. It's just that at completely random times this error pops up. There's no firewall blocking any internal traffic; the timeout value is 3, but it doesn't take 3 seconds before it fails to connect. With the default mysql client I can connect from ServerA to ServerB, C and D and haven't encountered a problem yet. Does anyone have a clue what I might be overlooking or what the problem could be? Because I've run out of ideas myself.


  • jQuery Ajax + PHP

    - by Kris.Mitchell
    I am having problems with jQuery Ajax and PHP. I have my PHP file set up to echo the data I am gathering from a MySQL database. I have verified that the database is returning something and that the string at the end of the function actually contains data. What is happening, though, is that it looks like the PHP echo happens before the Ajax call, causing the PHP data to be displayed at the top of the page and not below in the proper div. I think it might have something to do with the timing of the Ajax and the PHP call, but I am not sure. So, why is the data not getting caught by $.ajax and placed into the div? Thanks for the help!

    jQuery:

        $(document).ready(function() {
            $.ajax({
                url: "../database_functions.php",
                type: "GET",
                data: "cat=jw&sub=pi&sort=no",
                cache: false,
                success: function (html) {
                    alert("Success!");
                    $('#product-list').html(html);
                }
            });
        });

    PHP:

        echo "Hello World";
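
    If the echoed text shows up at the top of the page rather than in the div, the usual cause is that database_functions.php is also being include()d by the page itself, so its echo runs during normal page rendering as well as for the Ajax request. A hedged sketch of one way to separate the two, where the parameter check simply mirrors the data string sent by the $.ajax call above:

        <?php
        // database_functions.php - only emit Ajax output when the expected parameters are present
        if (isset($_GET['cat'], $_GET['sub'], $_GET['sort'])) {
            $html = "Hello World";   // build the real markup from the database here
            echo $html;              // this is what lands in success(html) and then in #product-list
            exit;                    // stop so nothing else in this file prints into the page
        }
        // ...function definitions that other pages include() can follow here...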


  • How can I introduce a regex action to match the first element in a Catalyst URI?

    - by RET
    Background: I'm using a CRUD framework in Catalyst that auto-generates forms and lists for all tables in a given database. For example, /admin/list/person, /admin/add/person or /admin/edit/person/3 all dynamically generate pages or forms as appropriate for the table 'person'. (In other words, Admin.pm has actions edit, list, add, delete and so on that expect a table argument and possibly a row-identifying argument.)

    Question: In the particular application I'm building, the database will be used by multiple customers, so I want to introduce a URI scheme where the first element is the customer's identifier, followed by the administrative action/table etc.:

        /cust1/admin/list/person
        /cust2/admin/add/person
        /cust2/admin/edit/person/3

    This is for "branding" purposes, and also to ensure that bookmarks or URLs passed from one user to another do the expected thing. But I'm having a lot of trouble getting this to work. I would prefer not to have to modify the subs in the existing framework, so I've been trying variations on the following:

        sub customer : Regex('^(\w+)/(admin)$') {
            my ($self, $c, @args) = @_;
            # validation of captured arg snipped...
            my $path = join('/', 'admin', @args);
            $c->request->path($path);
            $c->dispatcher->prepare_action($c);
            $c->forward($c->action, $c->req->args);
        }

    But it just will not behave. I've used regex matching actions many times, but putting one in the very first 'barrel' of a URI seems unusually traumatic. Any suggestions gratefully received.


  • Passing session between JSF backing bean and model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for when a file is uploaded. I pass this file to a parser, and in the parser I validate each row present in the CSV file. If validation fails, I have to log the information, saving it to a logging table in the database.

    My end goal: to get session information in the logging bean so that I can get the InitialContext and make a call to an EJB to save the data to the database.

    What is happening: In my upload backing bean I can get the session, but when I call the parser I do not pass the session information along, because I do not want the parser to depend on the session (I want to unit test the parser in isolation). So in my parser I have no session information; from the parser I make a call to the logging bean (just a bean with some EJB methods), but in this logging bean I need the session because I need to get the InitialContext.

    Question: Is there a way in JSF to get, in my logging bean, the session that I have in my upload backing bean? I tried doing:

        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false);

    but the session value was null. The more general question would be: how can I get session information in a model bean or other beans that are referenced from backing beans in which we have the session? Is there a generic method in JSF with which we can access session information throughout a JSF application?
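
    Worth noting: FacesContext.getCurrentInstance() only returns a context on the thread that is currently processing a JSF request, which is why it comes back empty in beans invoked outside that path. One common pattern that keeps the parser session-free is to resolve everything web-tier-specific in the backing bean and hand the logging bean only what it actually needs. A sketch, where LoggingBean, CsvParser and the constructor arguments are assumed names standing in for the asker's own classes, and UploadedFile stands in for whatever upload component type is in use:

        // resolve web-tier state here, in the backing bean, during the JSF request
        public void handleUpload(UploadedFile file) throws Exception {
            FacesContext ctx = FacesContext.getCurrentInstance();
            HttpSession session = (HttpSession) ctx.getExternalContext().getSession(true); // true: create if absent

            LoggingBean logger = new LoggingBean(new InitialContext());   // or pass the session itself
            CsvParser parser = new CsvParser(logger);                     // parser never touches JSF or the session
            parser.parse(file);
        }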


  • FileConnection BlackBerry memory usage

    - by Dean
    Hello, I'm writing a BlackBerry application that reads ints and strings out of a database. This is my first time dealing with reading/writing on the BlackBerry, so forgive me if this is a dumb question. The database file I'm reading is only about 4 kB. I open the file with the following code:

        fconn = (FileConnection) Connector.open("file_path_here", Connector.READ);
        if (fconn.exists() == false) {
            fconn.close();
            return;
        }
        is = fconn.openDataInputStream();
        while (!eof) {
            //etc...
        }
        is.close();
        fconn.close();

    The problem is, this code appears to be eating a LOT of memory. Using breakpoints and the "Memory Statistics" view, I determined the following:

    - calling Connector.open creates 71 objects and changes "RAM Bytes in use" by 5376
    - calling fconn.openDataInputStream() increases RAM usage by a whopping 75920

    Is this normal? Or am I doing something wrong? And how can I fix this? 75MB of RAM is a LOT of memory to waste on a handheld device, especially when the file I'm reading is only 4kB and I haven't even begun reading any data! How is this even possible?


  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written - I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new in the field, I've never worked on an application that uses a database as much as this site does, so I'm not sure how common this is. It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or passed to a constructor. Classes that are initialized with a DataTable always assign the DataTable to a private/protected field, yet only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP. Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?


  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) ). Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file is at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)


  • C# Interface Method calls from a controller

    - by ArjaaAine
    I was just working on some application architecture, and this may sound like a stupid question, but please explain to me how the following works.

    Interface:

        public interface IMatterDAL
        {
            IEnumerable<Matter> GetMattersByCode(string input);
            IEnumerable<Matter> GetMattersBySearch(string input);
        }

    Class:

        public class MatterDAL : IMatterDAL
        {
            private readonly Database _db;

            public MatterDAL(Database db)
            {
                _db = db;
                LoadAll(); //private method
            }

            public virtual IEnumerable<Matter> GetMattersBySearch(string input)
            {
                //CODE
                return result;
            }

            public virtual IEnumerable<Matter> GetMattersByCode(string input)
            {
                //CODE
                return results;
            }
        }

    Controller:

        public class MatterController : ApiController
        {
            private readonly IMatterDAL _publishedData;

            public MatterController(IMatterDAL publishedData)
            {
                _publishedData = publishedData;
            }

            [ValidateInput(false)]
            public JsonResult SearchByCode(string id)
            {
                var searchText = id; //better name for this
                var results = _publishedData.GetMattersBySearch(searchText).Select(
                    matter => new
                    {
                        MatterCode = matter.Code,
                        MatterName = matter.Name,
                        matter.ClientCode,
                        matter.ClientName
                    });
                return Json(results);
            }
        }

    This works: when I call my controller method from jQuery and step into it, the call to the _publishedData method goes into the MatterDAL class. I want to know how my controller knows to go to the MatterDAL implementation of the interface IMatterDAL. What if I have another class called MatterDAL2 which is based on the same interface - how will my controller know to call the right method then? I am sorry if this is a stupid question; this is baffling me.
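
    The controller itself never chooses: whatever dependency injection container the project has wired into MVC/Web API resolves IMatterDAL to a registered concrete type when it constructs MatterController. A minimal sketch, assuming Unity purely for illustration (the project could just as well be using Ninject, Autofac, StructureMap, etc.):

        // composition root (runs at application start) - the one place that decides
        // which implementation every constructor asking for IMatterDAL receives
        var container = new UnityContainer();
        container.RegisterType<IMatterDAL, MatterDAL>();
        // switch the whole app to the other implementation by changing this single line:
        // container.RegisterType<IMatterDAL, MatterDAL2>();

        // the container (via a dependency resolver hooked into MVC/Web API) then does the equivalent of:
        var controller = container.Resolve<MatterController>();   // injects MatterDAL into the constructor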


  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, first - this is not meant to be an ignorant 'which is better' flame-war thread... Rather, I need help in making an architecture decision / argument to put forward to my boss. Skipping the details - I simply would love to know about, and find the results of, anyone who has done performance comparisons of shell scripts vs. [insert general-purpose programming language (interpreted) here], such as C# or Java... Surprisingly, I have spent some time on Google and searching here and found none of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in some number of loops doing different types of SQL (Oracle preferred, but MSSQL would do) queries such as any of the CRUD ops - and also, without hitting the database, a regular 50k-iteration loop comparison doing different types of calculations, and things of that nature? In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any GPPL that's interpreted would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations / instructions, etc. Before you ask 'why not just write a quick test yourself?', the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in a near-perpetual deadline crunch as it is ;). Thanks so much in advance,


  • Entity Framework self-referencing loop detected

    - by Lyd0n
    I have a strange error. I'm experimenting with a .NET 4.5 Web API, Entity Framework and MS SQL Server. I've already created the database and set up the correct primary and foreign keys and relationships. I've created an .edmx model and imported two tables: Employee and Department. A department can have many employees, and this relationship exists. I created a new controller called EmployeeController, using the scaffolding options to create an API controller with read/write actions using Entity Framework. In the wizard I selected Employee as the model and the correct entity for the data context. The method that is created looks like this:

        // GET api/Employee
        public IEnumerable<Employee> GetEmployees()
        {
            var employees = db.Employees.Include(e => e.Department);
            return employees.AsEnumerable();
        }

    When I call my API via /api/Employee, I get this error:

        ...The 'ObjectContent`1' type failed to serialize the response body for content type 'application/json; ...
        System.InvalidOperationException","StackTrace":null,"InnerException":{"Message":"An error has occurred.","ExceptionMessage":"Self referencing loop detected with type 'System.Data.Entity.DynamicProxies.Employee_5D80AD978BC68A1D8BD675852F94E8B550F4CB150ADB8649E8998B7F95422552'. Path '[0].Department.Employees'.","ExceptionType":"Newtonsoft.Json.JsonSerializationException","StackTrace":" ...

    Why is it self-referencing [0].Department.Employees? That doesn't make a whole lot of sense. I would expect this to happen if I had circular referencing in my database, but this is a very simple example. What could be going wrong?
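
    The loop comes from the navigation properties rather than the database: Employee.Department points back at Department.Employees, so the JSON serializer walks Employee -> Department -> Employees -> ... indefinitely. Two common ways to break the cycle, sketched below for a Web API project using the default Json.NET formatter - treat this as a sketch of options, not the single definitive fix:

        // in WebApiConfig.Register(HttpConfiguration config):
        var json = config.Formatters.JsonFormatter.SerializerSettings;
        json.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore;   // stop serializing once a loop is detected

        // and/or, on the context (or right before the query), keep EF from handing the
        // serializer lazy-loading proxies whose back-references re-open the loop:
        db.Configuration.ProxyCreationEnabled = false;
        db.Configuration.LazyLoadingEnabled = false;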


  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool, i.e. the user pipes in a CSV file and gets a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock: I cannot figure out how to get R to keep running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit with an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device is not supported (X11). Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this?

    Edit: Thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)

        ## function to display the plot, called by tkrplot and embedded in a window
        plotIt <- function() {
            plot(x = 1:10, y = 1:10)
        }

        ## create top-level window
        tt <- tktoplevel()

        ## variable to wait on, like a condition variable, to be set by the event handler
        done <- tclVar(0)

        ## bind to the window destroy event, set the done variable when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)

        ## have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))

        ## wait until done is true
        tkwait.variable(done)


  • My TreeView data is not changing

    - by Vibin Jith
    Hi, I am trying to display user permissions in a TreeView. The permissions for each user are stored in the database as an XML document. On the page, a combo box lists the users and a TreeView is used to bind the permission XML. When a user is selected in the combo box, I take the XML from the database, connect it to an XmlDataSource and bind it to the TreeView.

    What is happening is that the TreeView fills with the XML nodes the first time only. The first selection is fine, but subsequent selections have no effect on the TreeView. The code runs without errors when debugging. Can you tell me why the TreeView data source is not updating? I used this code:

        Dim permissionRoot = From permissionNode In MyUser.UserPermissionXml.Root.Elements("menuNode")
        XmlTreeViewSource.Data = permissionRoot(0).ToString
        trvPermission.DataSource = XmlTreeViewSource
        trvPermission.DataBind()
        SetPermission(trvPermission.Nodes(0))

    The markup:

        <asp:TreeView ID="trvPermission" runat="server" ExpandDepth="2" ShowCheckBoxes="All"
            ShowLines="True" ForeColor="#005782">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="menuNode" TextField="title" ValueField="value" />
            </DataBindings>
        </asp:TreeView>


  • GridView ObjectDataSource LINQ paging and sorting using a multiple-table query

    - by user367426
    I am trying to create a paging and sorting object data source that, before execution, returns all results, then sorts on these results before filtering, and then uses the Take and Skip methods, with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx

    Now I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    The problem I have is that many of the queries return joined tables, and I would like to sort on fields from those other tables. So when executing a query you can create a function that assigns the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using:

        from a in dc.Accounts
        where a.CompanyID == companyID && a.Archived == null
        select new { PriceTypeName = a.DisplayPriceType.Name })

    but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.


  • ASP.NET MVC 2 Model Data Persistence

    - by toccig
    I'm an MVC 1 programmer, new to MVC 2. The data will not persist to the database in an edit scenario; create works fine.

    Controller:

        //
        // POST: /Attendee/Edit/5
        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Attendee attendee)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(attendee, "Attendee");
                repository.Save();
                return RedirectToAction("Details", attendee);
            }
            else
            {
                return View(attendee);
            }
        }

    Model:

        [MetadataType(typeof(Attendee_Validation))]
        public partial class Attendee
        {
        }

        public class Attendee_Validation
        {
            [HiddenInput(DisplayValue = false)]
            public int attendee_id { get; set; }

            [HiddenInput(DisplayValue = false)]
            public int attendee_pin { get; set; }

            [Required(ErrorMessage = "* required")]
            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_fname { get; set; }

            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_mname { get; set; }
        }

    I tried to add [Bind(Exclude="attendee_id")] above the class declaration, but then the value of the attendee_id attribute is set to '0'.

    View (strongly typed):

        <% using (Html.BeginForm()) {%>
        ...
        <%=Html.Hidden("attendee_id", Model.attendee_id) %>
        ...
        <%=Html.SubmitButton("btnSubmit", "Save") %>
        <% } %>

    Basically, the repository.Save() call seems to do nothing. I imagine it has something to do with a primary key constraint violation, but I'm not getting any errors from SQL Server. The application appears to run fine, but the data is never persisted to the database.
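
    One thing worth checking: the posted attendee is a brand-new object that the repository's data context has never seen, so UpdateModel and Save() may have no tracked changes to write. A common MVC pattern is to load the tracked entity first and copy the posted values onto it. A sketch follows, where repository.GetById is an assumed method name standing in for however the repository actually looks up an Attendee:

        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int id, FormCollection form)
        {
            // fetch the entity the data context is already tracking (GetById is an assumed repository method)
            Attendee attendee = repository.GetById(id);

            if (TryUpdateModel(attendee))     // copies the posted form values onto the tracked entity
            {
                repository.Save();            // the context now has real changes to persist
                return RedirectToAction("Details", new { id = attendee.attendee_id });
            }
            return View(attendee);
        }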


  • Read varbinary data in Java

    - by masoud farahani
    I made a Java application which reads some files from SQL Server. Those files are saved to a varbinary(MAX) field in SQL Server by a third-party web service. My problem is that when I read those files with my Java application, the binary data shows different content. In fact, I read the data byte by byte and figured out that some bytes do not show the real values which were saved in the database. I found out what the problem is, but I couldn't find a solution yet: in the web service, every varbinary value is saved to the database as byte data (in .NET a byte takes values 0 to 255). But when I read the binary data in Java, it takes different values and causes an exception with some values, because in Java a byte takes values -128 to 127. In my Java application I want to write those data to a file with the OutputStream.write(byte[]) method. How can I solve this problem? I think I have to find a way to convert a C# byte[] to a Java byte[] (or binary data), but how can I do that?
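
    For what it's worth, the underlying bit patterns are identical in both languages - only the numeric interpretation differs - so writing the bytes straight back out does not corrupt anything; masking is only needed when a byte is used as a number. A small illustrative sketch (the ResultSet column and file names are assumed, and java.io/java.sql imports are taken as given):

        // rs is an assumed java.sql.ResultSet positioned on the row holding the varbinary column
        byte[] data = rs.getBytes("file_content");          // raw bits, exactly as .NET stored them

        try (OutputStream out = new FileOutputStream("restored.bin")) {
            out.write(data);                                // preserves the bytes bit-for-bit
        }

        // only when a byte is needed as the 0..255 value .NET used should it be masked:
        int unsigned = data[0] & 0xFF;                      // e.g. (byte) 0xC8 (-56 in Java) becomes 200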


  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it?

    Currently I bought a shared hosting account on Lunarpages, and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

    - There is a load balancer that first receives requests and then decides where to route each request
    - The request is then handled by a server replica that processes it, updates the database (if required) and sends the response back to the client
    - If a similar request comes in, a caching mechanism like memcached kicks in and returns objects from the cache
    - A black box that handles database replication

    Specifically, I am trying to do the following:

    - Set up a load balancer (my homework revealed that HAProxy is one such load balancer)
    - Set up replication so that the databases can be synchronized
    - Use memcached
    - Configure Apache to work with multiple web servers
    - Partition the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage)

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
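
    To make the caching component concrete, here is a minimal PHP sketch of the read-through pattern described above: serve from memcached when possible, otherwise hit the database and store the result. The host, key and TTL are placeholder values, fetchFromDatabase() is a hypothetical helper standing in for the real query, and it assumes the php-memcached extension:

        // read-through cache in front of the database
        $cache = new Memcached();
        $cache->addServer('127.0.0.1', 11211);          // placeholder memcached host/port

        $key = 'user_profile_' . $userId;
        $profile = $cache->get($key);

        if ($profile === false) {                        // cache miss - go to the database
            $profile = fetchFromDatabase($userId);       // hypothetical helper around the real query
            $cache->set($key, $profile, 300);            // keep it for 5 minutes
        }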


  • Enterprise integration of disparate systems

    - by Chris Latta
    We're about to embark on a fairly large integration effort to kill off a bunch of Access and SQL Server databases and get everything into one coherent enterprise system. There are also a number of other systems (accounting, CRM, payroll, MS Exchange) that hold critical data that we need to integrate (used for data validation in other systems), report on and otherwise expose. It is likely that some of these systems will change in the next few years, so we need to isolate our systems to be ready for change. Ideally we would be able to expose our forms in a consistent manner across as many of our systems as possible without having to re-develop them for each system. We are currently targeting SharePoint (2007 and soon 2010), Office (2007 and soon 2010 - Word, Excel, PowerPoint and Outlook), Reporting Services, .NET console applications, .NET Windows applications, shell extensions, with the possibility of exposing some functionality on mobile devices (BlackBerries currently, maybe iPhones later) and via our website. We're moving development to Visual Studio 2010 (from 2005) ahead of migrating to SharePoint 2010 and Office 2010. Given that most of our development presently targets the .NET framework (mostly in C#), it seems logical to stick with this unless there is some compelling reason to switch frameworks/platforms for some aspects. We're thinking of your standard Database - Data Integration layer - Business Objects layer - Web Services (or REST) layer - Client Application stack, plus doing our own client application with WPF (or something else?) forms that can also be exposed in the MS systems (SharePoint, Office, Windows). So, we don't want much, just everything :) Basically we need to isolate ourselves from database and system changes, create an API that can be used throughout our systems, and then make this functionality available in our client applications. I'm very keen to get pointers from anyone who has tips on how to pull this off. Should we look at the Enterprise Library as a place to start? Is REST with ASP.NET MVC 2 a better solution than web services for a system like this? Will WPF deliver forms re-use, or is there something better?

