Search Results

Search found 33139 results on 1326 pages for 'embedded database'.


  • Can I make a TCP/IP session last less than 60 seconds?

    - by Pavel
    Our server is overloaded with TCP/IP sessions: we have 1200-1500 of them, and most are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT occupies a socket until the 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients go unserved. I ran a simple test: downloading an XML file from the server with Internet Explorer 8.0 finishes in a fraction of a second, but I then see the TCP/IP connection hanging in TIME_WAIT for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or shorten it, so the socket is freed for new connections? I understand why a TCP/IP connection enters TIME_WAIT, but I don't understand why Internet Explorer does not close the connection once the XML download is over. The details: our server runs a web service written in Perl (mod_perl); the service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). The Apache "KeepAlive" option is set to 0.

    Read the article

  • ASP.NET - Accessing copied content

    - by James Kolpack
    I have a class library project containing some content files configured with the "Copy if newer" copy build action. This results in the files being copied to a folder under ...\bin\ for every project in the solution. In the same solution I have an ASP.NET web project (which is MVC, by the way). In the library, a static constructor loads the files into data structures accessible by the web project. Previously I included the content as an embedded resource; I now need to be able to replace the files without recompiling. I want to access the data in three different contexts:

      - Unit testing the library assembly
      - Debugging the web application
      - Hosting the site in IIS

    For unit testing, Environment.CurrentDirectory points to a path containing the copied content. When debugging, however, it points to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. I've also looked at Assembly.GetExecutingAssembly().Location, which points to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c44f9da4\9238ccc\assembly\dl3\eb4c23b4\9bd39460_f7d4ca01\. What I need is the physical location of the web root's \bin folder, but since I'm in a static constructor in the library project, I don't have access to Request.PhysicalApplicationPath. Is there some other environment variable or structure where I can always find my "Copy if newer" files?

    Read the article

  • MS Access Bulk Insert exits app

    - by Brij
    In a web service I have to bulk-insert data into an MS Access database. At first there was a single complex INSERT ... SELECT query; it produced no error, but the app exited after inserting some records. I have since split the query up to simplify it, using LINQ to remove the complexity, and I now insert the records in a loop. There are approximately 10,000 records. Again, the same problem: I put a breakpoint after the loop, but it is never hit. I have used Try/Catch, but no error is caught.

        For Each item In lstSelDel
            Try
                qry = String.Format("insert into Table(Id,link,file1,file2,pID) values ({0},""{1}"",""{2}"",""{3}"",{4})", item.WebInfoID, item.Links, item.Name, item.pName, pDateID)
                DAL.ExecN(qry)
            Catch ex As Exception
                Throw ex
            End Try
        Next

        Public Shared Function ExecN(ByVal SQL As String) As Integer
            Dim ret As Integer = -1
            Dim nowConString As String = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + System.AppDomain.CurrentDomain.BaseDirectory + "DataBase\\mydatabase.mdb;"
            Dim nowCon As System.Data.OleDb.OleDbConnection
            nowCon = New System.Data.OleDb.OleDbConnection(nowConString)
            Dim cmd As New OleDb.OleDbCommand(SQL, nowCon)
            nowCon.Open()
            ret = cmd.ExecuteNonQuery()
            nowCon.Close()
            nowCon.Dispose()
            cmd.Dispose()
            Return ret
        End Function

    After the app exits, I see w3wp.exe using more than 50% of memory. Any idea what is going wrong? Is there a limitation in MS Access?

    Read the article

  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so JPA can use them to create tables via Hibernate. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation so it is renamed to RevisionT and does not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session. All the other tables seem to get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing and which errors occur? Thanks.

    UPDATE: I tried creating the schema as a Derby DB and it was successful. However, one of the fields ends up with the name "index". I use @org.hibernate.annotations.IndexColumn to specify a name other than a reserved word, yet the column is always called "index" when it is created. Here's a sample of the suspect annotations:

        @ManyToOne
        @JoinColumn(name="MasterTopID")
        @IndexColumn(name="Cx3tHApe")
        protected MasterTop masterTop;

    Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?
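
    For reference, a minimal sketch of the mapping shape under which Hibernate normally honours a custom index-column name: @IndexColumn is read from the collection (List) side of the association rather than from the child's @ManyToOne field, and @Entity(name = ...) doubles as the default table name. The entities below are hypothetical and trimmed to the relevant annotations; setting hibernate.show_sql=true (and watching the org.hibernate.tool.hbm2ddl logging) is the usual way to see exactly which DDL is emitted and why a table is skipped.

        import java.util.List;
        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.OneToMany;
        import org.hibernate.annotations.IndexColumn;

        // Hypothetical entities; only the mapping shape matters here.
        @Entity(name = "RevisionT")   // entity name, also used as the default table name
        class Revision {
            @Id @GeneratedValue
            Long id;
        }

        @Entity
        class MasterTop {
            @Id @GeneratedValue
            Long id;

            // The list-index column is declared on the collection side of the
            // association; the name given here becomes a column in the child table.
            @OneToMany
            @JoinColumn(name = "MasterTopID")
            @IndexColumn(name = "Cx3tHApe")
            List<Revision> revisions;
        }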

    Read the article

  • Passing the session between a JSF backing bean and the model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for when a file is uploaded. I pass this file to a parser, and in the parser I validate the rows present in the CSV file. If validation fails, I have to log the information, saving it to a logging table in the database.

    My end goal: get session information in the logging bean so that I can obtain an InitialContext and call an EJB to save data to the database.

    What is happening: in my upload backing bean I have the session, but when I call the parser I do not pass the session along, because I do not want the parser to depend on it (I want to unit test the parser on its own). So the parser has no session information. From the parser I call the logging bean (just a bean with some EJB methods), but in this logging bean I need the session because I need to get the InitialContext.

    Question: is there a way in JSF to get, in my logging bean, the session that I have in my upload backing bean? I tried:

        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false);

    but the session value was null. A more general question would be: how can I get session information in a model bean or other beans that are referenced from backing beans in which we have the session? Is there a generic way in JSF to access session information throughout a JSF application?
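
    For reference, one low-coupling option is to read whatever the logging bean needs from the session while still inside the backing bean, then pass it down as plain values so the parser stays unit-testable. A minimal helper sketch (the attribute name is hypothetical) that only works on the thread currently processing the JSF request:

        import java.util.Map;
        import javax.faces.context.ExternalContext;
        import javax.faces.context.FacesContext;

        // Minimal sketch: read session attributes inside JSF-managed code and pass
        // them on as plain values, so parser/logging classes need no JSF dependency.
        public class SessionLookup {

            public static Object sessionAttribute(String name) {
                FacesContext ctx = FacesContext.getCurrentInstance();
                if (ctx == null) {
                    return null;                     // not inside a JSF request
                }
                ExternalContext ext = ctx.getExternalContext();
                Map<String, Object> session = ext.getSessionMap();  // backed by the HTTP session
                return session.get(name);
            }
        }

    FacesContext.getCurrentInstance() returns null outside the JSF request-processing thread, which is one common reason the getSession(false) approach comes back empty when called from deeper layers.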

    Read the article

  • Oracle manually add an FK constraint

    - by Oxymoron
    A client wants to automate a certain process which includes creating a new key structure in a LIVE database, so I need to create relations between table columns. I've found that the views ALL_CONS_COLUMNS and USER_CONSTRAINTS hold information about constraints, so if I were to manually create constraints by inserting into these, I should be able to recreate the original constraints. My question: are there any more tables I should look into? Do you have alternate suggestions? This sounds VERY dirty and error-prone to begin with. The current modus operandi:

      1. Create a new column in each table for the PK;
      2. Generate a GUID for this PK;
      3. Create a new column in each table for the FKs;
      4. Fetch the GUID associated with the FK;
      ....... done so far ......
      5. Add the new constraint based on the old one;
      6. Remove the old constraint;
      7. Rename the new columns.

    This is kind of dodgy and I'd rather change my method; any ideas would be helpful. To put it differently: the client wants to change the key structure from int to GUID on a live database. What's the best way to approach this?
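
    For reference, USER_CONSTRAINTS and ALL_CONS_COLUMNS are read-only dictionary views that describe constraints; the constraints themselves are created with ALTER TABLE ... ADD CONSTRAINT DDL. A rough sketch of issuing that DDL from code (table, column and constraint names are hypothetical, and an Oracle JDBC driver on the classpath is assumed):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        // Sketch only: adds the new GUID-based keys with plain DDL instead of
        // touching the dictionary views.
        public class AddFkConstraint {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                     Statement stmt = con.createStatement()) {

                    // Promote the parent's new GUID column to primary key.
                    stmt.execute("ALTER TABLE parent_t ADD CONSTRAINT parent_t_pk "
                               + "PRIMARY KEY (guid_id)");

                    // Point the child's new column at the parent's new key.
                    stmt.execute("ALTER TABLE child_t ADD CONSTRAINT child_t_fk "
                               + "FOREIGN KEY (parent_guid_id) REFERENCES parent_t (guid_id)");
                }
            }
        }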

    Read the article

  • How do casts actually work at the CLR level?

    - by devoured elysium
    When doing an upcast or downcast, what really happens behind the scenes? I had the idea that when doing something like:

        string myString = "abc";
        object myObject = myString;
        string myStringBack = (string)myObject;

    the cast in the last line would serve only to tell the compiler that we are sure we are not doing anything wrong, so no actual casting code would be embedded in the compiled output. It seems I was wrong:

        .maxstack 1
        .locals init (
            [0] string myString,
            [1] object myObject,
            [2] string myStringBack)
        L_0000: nop
        L_0001: ldstr "abc"
        L_0006: stloc.0
        L_0007: ldloc.0
        L_0008: stloc.1
        L_0009: ldloc.1
        L_000a: castclass string
        L_000f: stloc.2
        L_0010: ret

    Why does the CLR need something like castclass string? There are two possible implementations for a downcast:

      1. You require a castclass something. When execution reaches the castclass instruction, the CLR attempts the cast. But then, what would happen had I omitted the castclass string line and tried to run the code?
      2. You don't require a castclass. Since all reference types have a similar internal structure, if you try to use a string on a Form instance, it will throw a wrong-usage exception (because it detects that a Form is not a string or any of its subtypes).

    Also, is the following statement from C# 4.0 in a Nutshell correct?

        Upcasting and downcasting between compatible reference types performs reference conversions: a new reference is created that points to the same object.

    Does it really create a new reference? I thought it would be the same reference, only stored in a different type of variable. Thanks

    Read the article

  • JasperReports: accessing images via an authenticated URL

    - by user363115
    Hi all, I am hoping this is a simple issue with a simple solution and that I have missed something obvious. Let me explain the problem. We have an application that generates PDF reports (using Jasper). These reports contain data from our database as well as imagery (photographs). The photographs are stored in S3, and we use signed URLs to access them; we link the photographs into our Jasper reports using these S3 URLs. Because the S3 URLs are signed and time-limited (by design), the process is as follows:

      1. The user requests a report to be generated.
      2. The report is filled and goes to our database (at which time the UUIDs of any required images are retrieved).
      3. For each UUID a signed S3 URL must be generated.
      4. To do this, the URL behind each report image is a call to an authenticated URL in our app (/get_img?uuid=foo).
      5. The controller behind this URL generates a signed S3 URL and returns it.
      6. The report loads the image.

    The problem is with step 4: the call to the authenticated URL fails because Jasper does not pass any authentication information with the request. Is there a solution here? Thanks all for your time. Ben
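
    One approach that avoids step 4 entirely: since the report is filled on the server, where the user's session and the S3 credentials are both available, generate the signed URL (or fetch the bytes) there and pass the image to the report as a fill parameter, so the JRXML image expression reads $P{PHOTO} instead of calling back into an authenticated URL. A rough sketch, with the signing helper left as an app-specific placeholder:

        import java.io.InputStream;
        import java.net.URL;
        import java.util.HashMap;
        import java.util.Map;

        import net.sf.jasperreports.engine.JREmptyDataSource;
        import net.sf.jasperreports.engine.JasperFillManager;
        import net.sf.jasperreports.engine.JasperPrint;
        import net.sf.jasperreports.engine.JasperReport;

        // Sketch: resolve the short-lived signed S3 URL on the server and hand the
        // image to the report as a parameter, so the report never has to call an
        // authenticated URL itself.
        public class FillWithSignedImage {

            public static JasperPrint fill(JasperReport report, String photoUuid) throws Exception {
                String signedUrl = generateSignedUrl(photoUuid);      // hypothetical helper
                InputStream photo = new URL(signedUrl).openStream();  // fetched server-side

                Map<String, Object> params = new HashMap<String, Object>();
                params.put("PHOTO", photo);  // JRXML image expression: $P{PHOTO}, class java.io.InputStream

                // JREmptyDataSource is just a placeholder for whatever data source the report uses.
                return JasperFillManager.fillReport(report, params, new JREmptyDataSource());
            }

            private static String generateSignedUrl(String uuid) {
                throw new UnsupportedOperationException("app-specific S3 signing goes here");
            }
        }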

    Read the article

  • MySQL: Limit word length for an INSERT

    - by elmaso
    Hi, every search query is saved in my database, but I want to limit the character length of any single word: odisafuoiwerjsdkle -- too long -- should not be written to the database. My current code is:

        $search = $_GET['q'];
        if (!($sql = mysql_query('' . 'SELECT * FROM `history` WHERE `Query`=\'' . $search . '\''))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $ID = '' . $row['ID'];
        }
        if ($ID == '') {
            mysql_query('' . 'INSERT INTO history (Query) values (\'' . $search . '\')');
        }
        if (!($sql = mysql_query('SELECT * FROM `history` ORDER BY `ID` ASC LIMIT 1'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }
        while ($row = mysql_fetch_array($sql)) {
            $first_id = '' . $row['ID'];
        }
        if (!($sql = mysql_query('SELECT * FROM `history`'))) {
            exit('<b>SQL ERROR:</b> 102, Cannot write history.');
        }

    Read the article

  • iOS UITableView reloadData doesn't increase content height

    - by Zoltan Varadi
    I'm facing a strange error in my iOS app and haven't been able to figure out why. In my UITableView I can add new cells, so I refresh the array that is the data source and call reloadData. Everything works without error: numberOfRowsInSection is called again, and its value is one more than its previous value. The new cell even gets inserted at the bottom of the table view, but I can't scroll down to it; I only know it's there because the table view bounces and I can see it. I'm guessing the table view's content height is not being increased for some reason, but I have no idea why. I'm using iOS 6, by the way. Any help is very much appreciated! Thanks, Zoli

    EDIT, answers to Srikanth's questions:

      - How is the data getting added? There's an NSArray containing objects of a specific type; array.count is the number of cells. This NSArray gets its values from a database query.
      - When you say you are refreshing the array, what do you mean? By refreshing the array I mean I execute a new query against the database and put the results into an NSArray, which becomes the table view's data source, like this:

            dataSourceArray = [dbManager executeQuery];
            [tableView reloadData];

      - Are you adding the data within the same view controller? Yes.
      - Can you show some code as to what you are doing? See above.
      - Is the table view at the root of the view controller's view, or is it within another view? The table view is the main view's first child.
      - Can you try adding data to the array at the beginning, so that you can view the cell being added at the top? I don't understand what you mean.

    Read the article

  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a Linux box running Ubuntu 10.04 LTS. I can access mysql via SSH (mysql -p) and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me...

        SELECT host, user FROM mysql.user

    yields:

        +-----------+------------------+
        | host      | user             |
        +-----------+------------------+
        | %         | AddedUser        |
        | 127.0.0.1 | root             |
        | li241-255 | root             |
        | localhost | debian-sys-maint |
        | localhost | root             |
        +-----------+------------------+

    The problem is that I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES, but that seems to have no effect. I know it's not Node.js, because I'm using the same code against another database and it works in that environment.

    Edit: this is the error Node gives me:

        node.js:50
            throw e; // process.nextTick error, or 'error' event on first tick
            ^
        Error: ECONNREFUSED, Connection refused
            at Stream._onConnect (net.js:687:18)
            at IOWatcher.onWritable [as callback] (net.js:284:12)

    Edit 2: I have the right port and server as best I can tell. My /etc/mysql/my.cnf contains:

        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

    My MySQL connection object contains:

        { host: 'localhost',
          port: 3306,
          user: 'removed',
          password: 'removed',
          database: '',
          typeCast: true,
          flags: 260047,
          maxPacketSize: 16777216,
          charsetNumber: 192,
          debug: false,
          ending: false,
          connected: false,
          _greeting: null,
          _queue: [],
          _connection: null,
          _parser: null,
          server: 'ExternalIpAddress' }

    Possibly useful?

        netstat -ln | grep mysql
        unix  2      [ ACC ]     STREAM     LISTENING     1016418    /var/run/mysqld/mysqld.sock

    Read the article

  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data relates to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data: when the client is launched, it connects to the server and gets the data to display in a grid view. The data is updated in real time on the server, and the view in the client is automatically refreshed. There are two problems with the current implementation:

      1. When the database gets too big, it takes a long time to load the client. What are the best ways to deal with this? One option is to maintain a cache on the client side: how is such a cache best implemented?
      2. How can the server maintain a diff so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.
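
    The stack here is C# and SQLite, but the usual shape of a "send only the diff" refresh is language-neutral: give each row a monotonically increasing change marker (a version counter or last-modified timestamp), have each client remember the highest value it has applied, and query only rows above it. A rough sketch of that idea in JDBC terms (table and column names are hypothetical; the sqlite-jdbc driver is assumed):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        // Sketch of an incremental refresh: the client tracks the largest change
        // counter it has applied and asks only for newer rows.
        public class DiffRefresh {
            private long lastSeenVersion = 0;

            public void refresh() throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:sqlite:access_log.db");
                     PreparedStatement ps = con.prepareStatement(
                         "SELECT id, path, hits, version FROM file_access WHERE version > ? ORDER BY version")) {
                    ps.setLong(1, lastSeenVersion);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            applyRow(rs.getLong("id"), rs.getString("path"), rs.getInt("hits"));
                            lastSeenVersion = Math.max(lastSeenVersion, rs.getLong("version"));
                        }
                    }
                }
            }

            private void applyRow(long id, String path, int hits) {
                // merge into the client-side cache / grid model
            }
        }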

    Read the article

  • Can I access views when drawn in the drawRect method?

    - by GianPac
    When creating the content view of a UITableViewCell, I used drawInRect:withFont:, drawAtPoint:... and others inside drawRect:, since all I needed was to lay out some text. It turns out that part of the drawn text needs to be clickable URLs, so I decided to create a UIWebView and insert it from my drawRect:. Everything seems to be laid out fine; the problem is that interaction with the UIWebView is not happening. I tried enabling the interaction; it did not work. I want to know: since with drawRect: I am drawing to the current graphics context, is there anything I can do to have my subviews interact with the user? Here is some of the code I use inside the UITableViewCell class:

        -(void) drawRect:(CGRect)rect {
            NSString *comment = @"something";
            [comment drawAtPoint:point forWidth:195 withFont:someFont lineBreakMode:UILineBreakModeMiddleTruncation];

            NSString *anotherCommentWithURL = @"this comment has a URL http://twtr.us";
            UIWebView *webView = [[UIWebView alloc] initWithFrame:someFrame];
            webView.delegate = self;
            webView.text = anotherCommentWithURL;
            [self addSubView:webView];
            [webView release];
        }

    As I said, the view draws fine, but there is no interaction with the web view from the outside. The URL gets embedded into HTML and should be clickable, but it is not. I got it working on another view, but on that one I lay out the views normally. Any ideas?

    Read the article

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys, and I tend to use them even on small projects to keep my database from filling with orphaned data. On larger projects I end up with gobs of keys covering upwards of 8-10 layers of data. I want to know if anyone can suggest a graceful way of handling "expected errors" from the MySQL database so that I can construct meaningful messages for the end user. Let me explain "expected errors" with an example. Say I have a set of tables used for basic discussions: discussions, questions, responses, users. Hierarchically they would look something like this:

        -users
        --discussion
        ---questions
        ----responses

    When I attempt to delete a user, the FKs check discussions, and if any discussions exist the deletion is restricted; deleting a discussion checks questions, and deleting a question checks responses. An "expected error" in this case would be attempting to delete a user: unless they are newly created, I can anticipate that one or more foreign keys will fail, causing an error. What I WANT to do is catch that error on deletion and be able to tell the end user something like "We're sorry, but all discussions must be removed before you can delete this user...". Now, I know I can keep and maintain matching arrays in PHP and map specific errors to messages, but that is messy and prone to becoming stale; or I could manually run a set of SELECTs before attempting the deletion, but then I am doing just as much work as without FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note, I generally use CodeIgniter for my application development, so if that opens up an avenue through that framework, please consider it in your answers. Thanks in advance.
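
    For what it's worth, the usual way to get the friendly message without pre-checking every child table is to attempt the delete and translate the specific foreign-key failure. A sketch of that idea, shown in JDBC terms (the PHP MySQL drivers report the same error number); 1451 is MySQL's code for "cannot delete or update a parent row: a foreign key constraint fails", and the table names are hypothetical:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        // Sketch: attempt the delete, and map a foreign-key failure to a
        // user-facing message instead of pre-checking every child table.
        public class DeleteUser {
            public static String tryDelete(long userId) {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/forum", "user", "password");
                     PreparedStatement ps = con.prepareStatement("DELETE FROM users WHERE id = ?")) {
                    ps.setLong(1, userId);
                    ps.executeUpdate();
                    return null;                                   // success, no message needed
                } catch (SQLException e) {
                    if (e.getErrorCode() == 1451) {
                        return "We're sorry, but all discussions must be removed before you can delete this user.";
                    }
                    return "An unexpected database error occurred.";
                }
            }
        }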

    Read the article

  • Create Session Variable from different datasources?

    - by Szafranamn
    I am currently developing a dynamic website using Dreamweaver CS5 with ColdFusion 9, using Access to create my databases along with QuickBooks and QODBC. I have established a login session variable stemming from the login page. This session variable is drawn from one datasource, "Access", table "Logininfo", field "FullName", but I want to create another session variable, either at this point or further into the member's page, to use in a query sequence. That session variable would stem from another datasource, "QBs", table "Invoice", field "CustomerRefFullName", which is generated through QuickBooks and QODBC. I am not sure whether this is possible, but if it is, how do I do it? I want to do this so I can query the Invoice table and load onto the customer's page the invoices unique to them, so it would have to be related to their login credentials. Below is the login code; if additional information is needed, let me know.

    This is my current plan for what I intend, hence the need to create the session variable: I have another datasource "QBs" with a table "Invoice". When I create another web page for the customer to see their invoices, I need a recordset that accesses that table. In order to do so, I think the best way would be to somehow convert the session.FullName (which came from the Access datasource, Logininfo table) into a session.CustomerRefFullName (which would have to come from datasource "QBs", table "Invoice", field "CustomerRefFullName"). That way I could set the query WHERE CustomerRefFullName and have each logged-in user see their specific invoices. So is there a way to turn a session variable based on one datasource/table into a different session variable based on a new datasource/table, even if it is unique just to that page? If there is a better route to take, I would greatly appreciate the advice.

        <cfif IsDefined("FORM.username")>
            SELECT FullName, Username, Password, AccessLevels
            FROM Logininfo
            WHERE Username= AND Password=

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue gives me pause. One area where I struggle not to use a Singleton is the Identity Map / Unit of Work pattern (which keeps tabs on Domain Object state).

        // Not actual code, but it should demonstrate the point
        class Monitor { // singleton construction omitted for brevity
            static $members = array(); // keeps record of all objects
            static $dirty = array();   // keeps record of all modified objects
            static $clean = array();   // keeps record of all clean objects
        }

        class Mapper { // queries database, maps values to object fields
            public function find($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::$members[$id] = $Object;
                return $Object;
            }
        }

        $User = $UserMapper->find(1);   // domain object is registered in the Identity Map
        $User->changePropertyX();       // object is marked "dirty" in the UoW

        // At this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);       // object is marked clean in the UoW

        // But a nicer API would be something like this
        $User->save();

        // But if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();

        // Or else I have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass or generate instances of the mapper/database access inside the Domain Object itself to satisfy DI; at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour, then a facility to do so is required in its construction. Perhaps it's a problem of responsibility: the Domain Object shouldn't be burdened with saving. It's just quite a neat feature of the Active Record pattern; it would be nice to implement it in some way.

    Read the article

  • Can a WebServiceHost be changed to avoid the use of HttpListener?

    - by sbyse
    I am looking for a way to use a WCF WebServiceHost without having to rely on the HttpListener class and its associated permission problems (see this question for details). I'm working on an application which communicates locally with another (third-party) application via their REST API. At the moment we are using WCF as an embedded HTTP server. We create a WebServiceHost as follows:

        String hostPath = "http://localhost:" + portNo;
        WebServiceHost host = new WebServiceHost(typeof(IntegrationService), new Uri(hostPath));

        // create a webhttpbinding for rest/pox and enable cookie support for session management
        WebHttpBinding webHttpBinding = new WebHttpBinding();
        webHttpBinding.AllowCookies = true;

        ServiceEndpoint ep = host.AddServiceEndpoint(typeof(IIntegrationService), webHttpBinding, "");
        host.Open();

        ChannelFactory<IIntegrationService> cf = new ChannelFactory<IIntegrationService>(webHttpBinding, hostPath);
        IIntegrationService channel = cf.CreateChannel();

    Everything works nicely as long as our application is run as administrator. If we run it on a machine without administrative privileges, host.Open() throws an HttpListenerException with ErrorCode == 5 (ERROR_ACCESS_DENIED). We can get around the problem by running httpcfg.exe from the command line, but this is a one-click desktop application and that's not really a long-term solution for us. We could ditch WCF and write our own HTTP server, but I'd like to avoid that if possible. What's the easiest way to replace HttpListener with a standard TCP socket while still using all of the remaining HTTP scaffolding that WCF provides?

    Read the article

  • Managing many-to-many relationships in an ASP.NET Wizard control

    - by Luis
    Say I have an entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik grid view to manage this (add/edit/delete); the problem is where to store that data, since in an insert form the entity has not been created in the database yet. I could store all that information in temporary lists residing in the view state, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files, and storing files in the view state is out of the question, as is storing them in the session... I have been thinking of implementing it so that the user has to submit some info first (say the first three steps), committing the data to the database and creating the parent entity, and then start inserting all the child entities; but this gets weird and confusing, since in the first steps you are not saving the data to the DB and in the next ones you are committing directly... Does anyone have any thoughts on this? Thanks

    Read the article

  • Embed URL from variable in HTML file in Rails application

    - by TejasM
    Hi, I have a doubt which might be silly for experienced Rails developers. I am creating an application which is supposed to embed a video player with a URL passed to it. So in the display view, i.e. the XXX.html.erb file, I am writing the code below. Now, the problem is that @movie.trailer is a variable in my Ruby code which holds the URL value, and I want the embedded video to load with the URL given by that variable's value. Any suggestion on how I am supposed to place the value of the Ruby variable (@movie.trailer) in this part?

        <object width="425" height="344">
          <param name="movie" value="<% @movie.trailer %>"></param>
          <param name="allowFullScreen" value="true"></param>
          <param name="allowscriptaccess" value="always"></param>
          <embed src="<% @movie.trailer %>" type="application/x-shockwave-flash"
                 allowscriptaccess="always" allowfullscreen="true"
                 width="425" height="344"></embed>
        </object>

    Note: this code works perfectly fine if I give the URL value statically. Please help.

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data, tens of thousands of rows (no, I don't mean that this is a lot of data for SQL Server; I'm just pointing out that I expect it to take some time to come down the pipe from the other web service). The data refresh can take between 5 and 10 minutes, of which the actual data-update portion is between 1 and 2 minutes. This means the service would be down, for all intents and purposes, while consumers are requesting this type of data. I would like to implement a system where the data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases: I populate the inactive one, swapping it to active before populating the other. I'm not sure I know the best way to do this. My current ideas revolve around either two sets of the schema in a single database (using views to access the active set) or two databases with the same schema, with the application rotating between them. Any suggestions from someone who has done something like this before?
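
    The question already sketches the usual answer (two copies, flip which one is active); for what it's worth, the shape of that flip is the same regardless of stack: fill the inactive copy, then switch a single pointer that readers consult. A minimal sketch of the pattern, shown in Java purely for illustration; in SQL Server terms the "pointer" is often a set of views or synonyms repointed at the freshly loaded schema.

        import java.util.concurrent.atomic.AtomicReference;

        // Sketch of a blue/green refresh: readers always use whichever copy the
        // pointer designates; the loader fills the inactive copy and then flips
        // the pointer in one step, so data stays available throughout the refresh.
        public class ActiveCopySwitch {
            private final AtomicReference<String> activeSchema = new AtomicReference<>("copy_a");

            public String schemaForReads() {
                return activeSchema.get();
            }

            public void refresh(DataLoader loader) {
                String active = activeSchema.get();
                String inactive = active.equals("copy_a") ? "copy_b" : "copy_a";
                loader.loadInto(inactive);        // slow part happens off to the side
                activeSchema.set(inactive);       // instant switchover for new readers
            }

            public interface DataLoader {
                void loadInto(String schemaName);
            }
        }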

    Read the article

  • GWT: creating a text widget for highly customized data entry

    - by Caffeine Coma
    I'm trying to implement a kind of "guided typing" widget for data entry, in which the user's text entry is highly controlled and filtered. When the user types a particular character I need to intercept and filter it before displaying it in the widget. Imagine, if you will, a Unix shell embedded as a web app; that's the kind of thing I'm trying to implement. I've tried two approaches. In the first, I extend a TextArea and add a KeyPressHandler to filter the characters. This works, but the browser-provided spelling correction is totally inappropriate, and I don't see how to turn it off. I've tried:

        DOM.setElementProperty(textArea.getElement(), "spellcheck", "false");

    but that seems to have no effect; I still get the red underlines over "typos". In the second approach I use a FocusWidget to get KeyPress events, and a separate Label or HTML widget to present the filtered characters back to the user. This avoids the spelling-correction issue, but since the FocusWidget is not a TextArea, the browser tends to intercept certain typed characters for internal use; e.g. Firefox will use the "/" character to begin a "Quick Find" on the page, and hitting Backspace will load the previous web page. Is there a better way to accomplish what I'm trying to do?
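
    For the first approach, a hedged sketch of a filtering TextArea: it cancels disallowed key presses in the KeyPressHandler and sets the spellcheck attribute (rather than the element property) on the underlying element. Browser support for that attribute varies, so treat it as something to try rather than a guaranteed fix; the character policy below is only an example.

        import com.google.gwt.event.dom.client.KeyPressEvent;
        import com.google.gwt.event.dom.client.KeyPressHandler;
        import com.google.gwt.user.client.ui.TextArea;

        // A TextArea that swallows disallowed characters before they reach the widget.
        public class GuidedTextArea extends TextArea {
            public GuidedTextArea() {
                getElement().setAttribute("spellcheck", "false");
                addKeyPressHandler(new KeyPressHandler() {
                    public void onKeyPress(KeyPressEvent event) {
                        char c = event.getCharCode();
                        if (!isAllowed(c)) {
                            event.preventDefault();   // character is filtered out
                        }
                    }
                });
            }

            private boolean isAllowed(char c) {
                return Character.isLetterOrDigit(c) || c == ' ';   // example policy
            }
        }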

    Read the article

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
    I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string:

        SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address, and I want to connect to the server above using that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer in the LAN again:

        SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

    This does not work. Note that I am a beginner, so I am not quite sure what I am doing, even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer on the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

    Read the article

  • Java MySQL multiple-word search

    - by user1703849
    I have a database in MySQL that has a name column which contains several words (a description). I am connected to the database with Java through Eclipse. I have a search that only returns results if the name field contains a single word.

        id:  name:           info:                   type:
        1    balloon         big red balloon         big
        2    house           expensive beautiful     luxury
        3    chicken wings   deep fried wings        tasty

    These are just random words, but as an example: my search can only find e.g. "balloon" and then show its info, but if I type "chicken wings" it does nothing. So is it possible to search columns that contain multiple words? This is my search code below:

        import java.io.*;
        import java.sql.*;
        import java.util.*;

        class Search {
            public static void main(String[] args) {
                Scanner input = new Scanner(System.in);
                try {
                    Connection con = DriverManager.getConnection(
                            "jdbc:mysql://example/mydb", "user", "password");
                    Statement stmt = (Statement) con.createStatement();
                    System.out.print("enter search: ");
                    String name = input.next();
                    String SQL = "SELECT * FROM menu where name LIKE '" + name + "'";
                    ResultSet rs = stmt.executeQuery(SQL);
                    while (rs.next()) {
                        System.out.println("Name: " + rs.getString("name"));
                        System.out.println("Description: " + rs.getString("info"));
                        System.out.println("Price: " + rs.getString("Price"));
                    }
                } catch (Exception e) {
                    System.out.println("ERROR: " + e.getMessage());
                }
            }
        }
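
    For reference, one common way to make such a search match rows whose name contains every typed word is to split the input on whitespace and AND together one LIKE '%word%' predicate per term, binding the terms with a PreparedStatement rather than concatenating them. A minimal sketch (connection details are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class MultiWordSearch {
            public static void main(String[] args) throws Exception {
                String search = "chicken wings";
                String[] terms = search.trim().split("\\s+");

                // Build "name LIKE ? AND name LIKE ? ..." with one placeholder per word.
                StringBuilder sql = new StringBuilder("SELECT * FROM menu WHERE 1=1");
                for (int i = 0; i < terms.length; i++) {
                    sql.append(" AND name LIKE ?");
                }

                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://example/mydb", "user", "password");
                     PreparedStatement ps = con.prepareStatement(sql.toString())) {
                    for (int i = 0; i < terms.length; i++) {
                        ps.setString(i + 1, "%" + terms[i] + "%");   // substring match per word
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("name") + " - " + rs.getString("info"));
                        }
                    }
                }
            }
        }

    Note too that Scanner.next() reads only a single whitespace-delimited token, so nextLine() is needed to capture a multi-word query in the first place; for larger tables a MySQL FULLTEXT index with MATCH ... AGAINST is the usual next step.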

    Read the article

  • Java "Pool" of longs or Oracle sequence with reusable values

    - by Anthony Accioly
    Several months ago I implemented a solution for choosing unique values from a range between 1 and 65535 (16 bits). This range is used to generate unique Route Target suffixes, which for this customer's massive network (it's a huge ISP) are a very contested resource, so any freed index needs to become immediately available to the end user. To meet this requirement I used a BitSet: allocate an RT index with set, deallocate a suffix with clear. The method nextClearBit() finds the next available index, and I handle synchronization/concurrency issues manually. This works pretty well for a small range: the entire index is small (around 10k), it is blazing fast, and it can easily be serialized into a BLOB field. The problem is that some new devices can handle RTs of 32 bits (range 1 to 4294967296), which can't be managed with a BitSet (it would, by itself, consume around 600 MB, and it is limited to the int range anyway). Even with this massive range available, the client still wants freed Route Targets to become available again to the end user, mainly because the lowest ones (up to 65535), which are compatible with old routers, are heavily contested. Before I tell the customer that this is impossible and that they will have to make do with my reusable index for lower RTs (up to 65550) and a database sequence for the other ones (which means that when a user frees a Route Target it will not become available again), would anyone shed some light? Maybe some kind soul has already implemented a high-performance number pool for Java (6, if it matters), or I am missing a killer feature of Oracle Database (11gR2, if it matters)... Wishful thinking. Thank you very much in advance.
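
    In case it helps frame the options: the usual in-memory compromise between a full BitSet and a fire-and-forget sequence is to combine a high-water mark with a sorted set of released values, so memory tracks only the "holes" rather than the whole range. A minimal, Java 6-compatible sketch (persistence and crash recovery deliberately left out):

        import java.util.concurrent.ConcurrentSkipListSet;
        import java.util.concurrent.atomic.AtomicLong;

        // Reusable index pool for a large range: released values are kept in a
        // sorted set and handed out again before the high-water mark advances.
        public class LongIndexPool {
            private final long max;
            private final AtomicLong highWater = new AtomicLong(0);   // last value ever issued
            private final ConcurrentSkipListSet<Long> released = new ConcurrentSkipListSet<Long>();

            public LongIndexPool(long max) { this.max = max; }

            public Long acquire() {
                Long reused = released.pollFirst();    // prefer the lowest freed value
                if (reused != null) {
                    return reused;
                }
                long next = highWater.incrementAndGet();
                return next <= max ? next : null;      // null: range exhausted
            }

            public void release(long value) {
                released.add(value);
            }
        }

    Persisting the released set (for example in a table keyed by the freed value) would keep the "freed values come back" behaviour across restarts.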

    Read the article

  • In MySQL, is "EXPLAIN ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything a user could put into $whatever that would make database changes, leak non-trivial information, or cause significant database load? If so, how?

    I know that via "explain $whatever" one can figure out what tables and columns exist (you have to guess the names, though), and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect anyone to be able to get information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run an arbitrary query (just to explain one query); even subqueries should not be executed, just explained. But I'm not a MySQL expert, and there are surely features of MySQL that I'm not even aware of.

    In trying to come up with a query plan, might the optimizer actually execute an expression in order to produce the value that an indexed field will be compared against? Consider:

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And could somefunction(...) then be written so that it modifies data?
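
    Not an answer to whether EXPLAIN itself can have side effects, but a belt-and-braces filter in front of the database call is cheap. A sketch of the kind of whitelist check involved (the original code is Perl/DBI; the policy itself is language-neutral, and the rules shown are only an example):

        import java.util.regex.Pattern;

        // Defense-in-depth sketch: only pass along input that looks like a single,
        // comment-free EXPLAIN of a SELECT statement.
        public class ExplainGuard {

            private static final Pattern ALLOWED =
                    Pattern.compile("(?is)^\\s*explain\\s+select\\b[^;]*$");

            public static boolean looksSafe(String statement) {
                if (statement == null || statement.contains("--") || statement.contains("/*")) {
                    return false;                  // reject SQL comments outright to keep the check simple
                }
                return ALLOWED.matcher(statement).matches();
            }

            public static void main(String[] args) {
                System.out.println(looksSafe("EXPLAIN SELECT * FROM t WHERE c = 1")); // true
                System.out.println(looksSafe("EXPLAIN DELETE FROM t"));               // false
                System.out.println(looksSafe("EXPLAIN SELECT 1; DROP TABLE t"));      // false
            }
        }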

    Read the article
