Search Results

Search found 33242 results on 1330 pages for 'database optimization'.


  • How can I introduce a regex action to match the first element in a Catalyst URI ?

    - by RET
    Background: I'm using a CRUD framework in Catalyst that auto-generates forms and lists for all tables in a given database. For example: /admin/list/person or /admin/add/person or /admin/edit/person/3 all dynamically generate pages or forms as appropriate for the table 'person'. (In other words, Admin.pm has actions edit, list, add, delete and so on that expect a table argument and possibly a row-identifying argument.)

    Question: In the particular application I'm building, the database will be used by multiple customers, so I want to introduce a URI scheme where the first element is the customer's identifier, followed by the administrative action/table etc.:

        /cust1/admin/list/person
        /cust2/admin/add/person
        /cust2/admin/edit/person/3

    This is for "branding" purposes, and also to ensure that bookmarks or URLs passed from one user to another do the expected thing. But I'm having a lot of trouble getting this to work. I would prefer not to have to modify the subs in the existing framework, so I've been trying variations on the following:

        sub customer : Regex('^(\w+)/(admin)$') {
            my ($self, $c, @args) = @_;
            # validation of captured arg snipped...
            my $path = join('/', 'admin', @args);
            $c->request->path($path);
            $c->dispatcher->prepare_action($c);
            $c->forward($c->action, $c->req->args);
        }

    But it just will not behave. I've used regex matching actions many times, but putting one in the very first 'barrel' of a URI seems unusually traumatic. Any suggestions gratefully received.
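
    A sketch of one way this is often approached in Catalyst, using Chained dispatch so the leading customer segment is captured before the existing admin actions run (the action names, PathPart values and forward target below are assumptions, not the framework's actual code):

        sub customer : Chained('/') PathPart('') CaptureArgs(1) {
            my ($self, $c, $cust) = @_;
            # validate the captured customer identifier, then stash it for later use
            $c->stash->{customer} = $cust;
        }

        # hypothetical wrapper that hands the remaining path to the existing Admin action
        sub list : Chained('customer') PathPart('admin/list') Args {
            my ($self, $c, @args) = @_;
            $c->forward('/admin/list', \@args);
        }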

    Read the article

  • Function returning a class containing a function returning a class

    - by Scott
    I'm working on an object-oriented Excel add-in to retrieve information from our ERP system's database. Here is an example of a function call:

        itemDescription = Macola.Item("12345").Description

    Macola is an instance of a class which takes care of database access. Item() is a function of the Macola class which returns an instance of an ItemMaster class. Description() is a function of the ItemMaster class. This is all working correctly. Items can be stored in more than one location, so my next step is to do this:

        quantityOnHand = Macola.Item("12345").Location("A1").QuantityOnHand

    Location() is a function of the ItemMaster class which returns an instance of the ItemLocation class (well, in theory anyway). QuantityOnHand() is a function of the ItemLocation class. But for some reason, the ItemLocation class is not even being initialized.

        Public Function Location(inventoryLocation As String) As ItemLocation
            Set Location = New ItemLocation
            Location.Item = item_no
            Location.Code = inventoryLocation
        End Function

    In the above sample, the variable item_no is a member variable of the ItemMaster class. Oddly enough, I can successfully instantiate the ItemLocation class outside of the ItemMaster class in a non-class module.

        Dim test As New ItemLocation
        test.Item = "12345"
        test.Code = "A1"
        quantityOnHand = test.QuantityOnHand

    Is there some way to make this work the way I want? I'm trying to keep the API as simple as possible, so that it only takes one line of code to retrieve a value.
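
    One variation worth trying while debugging (a sketch under the assumption that Item and Code are plain properties of ItemLocation): build the object in a local variable and assign it to the return value last, which rules out any quirk around using the function name as both the return slot and a working object:

        Public Function Location(inventoryLocation As String) As ItemLocation
            Dim result As ItemLocation
            Set result = New ItemLocation
            result.Item = item_no          ' member variable of ItemMaster
            result.Code = inventoryLocation
            Set Location = result
        End Function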

    Read the article

  • MVVM: Do I need Inheritance with ViewModels A + B ?

    - by Lisa
    Hello guys, my first post on SO because EE sucks in the meantime ;P I am using WPF and MVVM in my desktop application.

    Scenario: I have a calendar with week A and week B, which rotate every X weeks depending on the user settings. But the UserControl "week B" is only visible when the user sets the option "rotating weeks"... The UserControl for week A has a DataGrid, and for week B I want to use the same UserControl, of course.

    What I want to achieve is that all data entered/chosen by the user in week A is saved/backed by a ViewModel A and Model C. When the user wants a rotating weekly calendar plan, I also need a ViewModel B and again Model C. The reason why I need to know whether data entered by the user belongs to week A or week B is that I have to write the entered data into the database in a certain order: db.Write(weekA), db.Write(weekB), db.Write(weekA), etc.

    I am unsure what a solution could look like... What would you do to identify ViewModel A or B, so you know in which order to write the data into the database? Any other suggestions are also welcome, of course; maybe I'm thinking in the wrong direction, it's late here :) I am new to MVVM so please be patient.

    Read the article

  • can I put my sqlite connection and cursor in a function?

    - by steini
    I was thinking I'd try to make my sqlite db connection a function instead of copy/pasting the ~6 lines needed to connect and execute a query all over the place. I'd like to make it versatile so I can use the same function for create/select/insert/etc... Below is what I have tried. The 'INSERT' and 'CREATE TABLE' queries are working, but if I do a 'SELECT' query, how can I work with the values it fetches outside of the function? Usually I'd like to print the values it fetches and also do other things with them. When I do it like below I get an error:

        Traceback (most recent call last):
          File "C:\Users\steini\Desktop\py\database\test3.py", line 15, in <module>
            for row in connection('testdb45.db', "select * from users"):
        ProgrammingError: Cannot operate on a closed database.

    So I guess the connection needs to be open so I can get the values from the cursor, but I need to close it so the file isn't always locked. Here's my testing code:

        import sqlite3

        def connection (db, arg):
            conn = sqlite3.connect(db)
            conn.execute('pragma foreign_keys = on')
            cur = conn.cursor()
            cur.execute(arg)
            conn.commit()
            conn.close()
            return cur

        connection('testdb.db', "create table users ('user', 'email')")
        connection('testdb.db', "insert into users ('user', 'email') values ('joey', 'foo@bar')")

        for row in connection('testdb45.db', "select * from users"):
            print row

    How can I make this work?
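
    A minimal sketch of one way around this, assuming the goal is simply to hand rows back to the caller: materialise the results with fetchall() while the connection is still open, then close it before returning (function and parameter names here are an illustration, not the asker's final code).

        import sqlite3

        def run_query(db, sql, params=()):
            # Open, execute, fetch everything while the connection is live,
            # commit, close, and return plain rows to the caller.
            conn = sqlite3.connect(db)
            try:
                conn.execute('pragma foreign_keys = on')
                cur = conn.cursor()
                cur.execute(sql, params)
                rows = cur.fetchall()   # empty list for CREATE/INSERT statements
                conn.commit()
                return rows
            finally:
                conn.close()

        for row in run_query('testdb.db', "select * from users"):
            print(row)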

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?
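
    One configuration change often tried for this symptom (an assumption about the cause, not a confirmed fix for this setup): the Named Pipes provider authenticates to the remote server's pipe with the Windows identity, which an impersonated token cannot do without delegation, so forcing the TCP/IP provider keeps the SQL-authenticated connection independent of the impersonated user. Server name, database and credentials below are placeholders:

        <connectionStrings>
          <!-- "tcp:" (or adding Network Library=DBMSSOCN) forces the TCP/IP provider instead of Named Pipes -->
          <add name="MainDb"
               connectionString="Data Source=tcp:NEWDBSERVER,1433;Initial Catalog=AppDb;User ID=appUser;Password=***"
               providerName="System.Data.SqlClient" />
        </connectionStrings>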

    Read the article

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have:

    a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc.
    b) I have an Excel document that connects to the Access database and pulls data in to visualize it.

    As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the form, based on the related hours. This operation is slow (~10 seconds) and seems to be redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance!

    Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself. Removing all the VLOOKUPs only took a second or two out of the load time. So the question stands: how can I rapidly and reliably get the data without so much time involvement (it loads around 3000 records into the PivotTables)?
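
    If the load step really is the bottleneck, one leaner path worth sketching (connection string, query and target sheet are placeholders, and this assumes ADO can reach the Access file directly): pull only the needed columns and drop them onto a sheet in a single call, rather than refreshing a large PivotTable and VLOOKUP-ing against it.

        Sub LoadFromAccess()
            Dim cn As Object, rs As Object
            Set cn = CreateObject("ADODB.Connection")
            cn.Open "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\tracker.accdb"
            Set rs = cn.Execute("SELECT Hours, Sales, UnreturnedCalls FROM DailyStats")
            Sheet1.Range("A2").CopyFromRecordset rs   ' one bulk copy, no per-cell lookups
            rs.Close
            cn.Close
        End Sub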

    Read the article

  • C# Linq to SQL connection string (newbie)

    - by Chris'o
    I am a new LINQ to SQL learner and this is my very first attempt to create a data viewer program. The idea is simple: I'd like to create a program that is able to view the content of a table in a database. That's it. I have hit an early problem here already, and I have seen many tutorials and articles online but I still can't fix the bug. Here is my code:

        static void Main(string[] args)
        {
            string cs = "Data Source=localhost;Initial Catalog=somedb;Integrated Security=SSPI;";
            var db = new DataClasses1DataContext(cs);
            db.Connection.Open();

            foreach (var b in db.Mapping.GetTables())
                Console.WriteLine(b.TableName);

            Console.ReadKey(true);
        }

    When I tried to check db.Connection.Equals(null), it returned false, so I thought I had connected successfully to the database, since there is no error at all. But the code above doesn't print anything out to the screen. I'm kind of lost and don't know what's going on here. Does anyone know what is going wrong?
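
    For reference, a minimal sketch that queries a mapped table instead of the mapping metadata (the Customers table and its columns are placeholders from an assumed .dbml; note that Mapping.GetTables() only lists entities that were actually added to the designer, so an empty .dbml prints nothing):

        static void Main(string[] args)
        {
            string cs = "Data Source=localhost;Initial Catalog=somedb;Integrated Security=SSPI;";
            using (var db = new DataClasses1DataContext(cs))
            {
                // "Customers" stands in for whatever table exists in the designer file
                foreach (var c in db.Customers)
                    Console.WriteLine(c.CustomerName);
            }
            Console.ReadKey(true);
        }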

    Read the article

  • Running Cargo From Maven antrun Plugin

    - by Frank
    I have a Maven (multi-module) project creating some WAR and EAR files for JBoss AS 7.1.x. For one purpose, I need to deploy one generated EAR file of one module to a fresh JBoss instance and run it, make some REST web service calls against it and stop JBoss. Then I need to package the results of these calls that were written to the database.

    Currently, I am trying to use Cargo and the Maven antrun plugin to perform this task. Unfortunately, I cannot get the three (Maven, antrun and Cargo) to play together. I don't have the uberjar that is used in the Ant examples of Cargo. How can I configure the antrun task so that the Cargo Ant task can create, start and deploy to my JBoss - ideally the one unpacked and configured by the cargo-maven2-plugin in another phase? Or is there a better way to achieve my goal of creating the database? I cannot really use the integration-test phase, as it is executed after the package phase, so I plan to do it all in the compile phase using antrun.

    Read the article

  • how to handle empty selection in a jface bound combobox?

    - by guido
    I am developing a search dialog in my Eclipse RCP application. In the search dialog I have a combobox as follows:

        comboImp = new CCombo(grpColSpet, SWT.BORDER | SWT.READ_ONLY);
        comboImp.setBounds(556, 46, 184, 27);
        comboImpViewer = new ComboViewer(comboImp);
        comboImpViewer.setContentProvider(new ArrayContentProvider());
        comboImpViewer.setInput(ImpContentProvider.getInstance().getImps());
        comboImpViewer.setLabelProvider(new LabelProvider() {
            @Override
            public String getText(Object element) {
                return ((Imp) element).getImpName();
            }
        });

    Imp is a database entity, ManyToOne to the main entity which is searched, and ImpContentProvider is the model class which speaks to the embedded SQLite database via JPA/Hibernate. This combobox is supposed to contain all instances of Imp, but also to allow an empty selection; its value is bound to a service bean as follows:

        IObservableValue comboImpSelectionObserveWidget =
                ViewersObservables.observeSingleSelection(comboImpViewer);
        IObservableValue filterByImpObserveValue =
                BeansObservables.observeValue(searchPrep, "imp");
        bindingContext.bindValue(comboImpSelectionObserveWidget, filterByImpObserveValue, null, null);

    As soon as the user clicks on the combo, a selection (the first element) is made: I can see the call to a selection listener I added on the viewer. My question is: after a selection has been made, how do I let the user change his mind and have an empty selection in the combobox? Should I add a "fake" empty instance of Imp to the list returned by the ImpContentProvider, or should I implement an alternative to ArrayContentProvider?

    One additional related question: why does calling deselectAll() and clearSelection() on the combo NOT set a null value on the bound bean?
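
    A sketch of the sentinel approach (the NO_IMP constant, its no-arg constructor and the setter shown are assumptions, not the project's code): keep one well-known "no selection" element at the top of the viewer's input and translate it back to null on the model side.

        // Hypothetical sentinel kept alongside the real entities (assumes Imp has a no-arg constructor)
        private static final Imp NO_IMP = new Imp();   // its getImpName() should return ""

        List<Object> input = new ArrayList<Object>();
        input.add(NO_IMP);
        input.addAll(ImpContentProvider.getInstance().getImps());  // assuming getImps() returns a Collection
        comboImpViewer.setInput(input);

        // In the bound bean's setter (or an UpdateValueStrategy converter),
        // map the sentinel back to "no filter":
        public void setImp(Imp imp) {
            this.imp = (imp == NO_IMP) ? null : imp;
        }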

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - Define a master test suite that contains all tests (e.g. using ClasspathSuite).
    - Design a sufficient set of JUnit categories (sufficient means that every desirable collection of tests is identifiable using one or more categories).
    - Define targeted test suites based on the master test suite and the set of categories.

    For example:

    - Identify categories for speed (slow, fast), dependencies (mock, database, integration), function (…), domain (…).
    - Demand that each test is properly qualified (tagged) with the relevant set of categories.
    - Create the master test suite using ClasspathSuite (all tests found in the classpath).
    - Create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration suite for domain X, etc.

    My question is more like soliciting the approval rate for such an approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained by the relevant suites with no suite maintenance. One concern is proper categorization of each test.
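
    For concreteness, a minimal sketch of what one targeted suite might look like in JUnit 4.8 (the category marker interfaces and the master suite class are assumptions):

        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.ExcludeCategory;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        // FastTests and DatabaseTests are hypothetical marker interfaces; individual tests
        // would be tagged with e.g. @Category(FastTests.class).
        @RunWith(Categories.class)
        @IncludeCategory(FastTests.class)
        @ExcludeCategory(DatabaseTests.class)
        @SuiteClasses({ AllTestsSuite.class })   // the ClasspathSuite-backed master suite
        public class FastNonDatabaseSuite {
        }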

    Read the article

  • [MFC] What is the reciprocal of CComboBox.GetItemData?

    - by Hamish Grubijan
    Instead of associating objects with combo box items, I associate long ids representing choices. They come from a database, so it seems natural to do so anyway. Now, I persist the id and not the index of the user's selection, so that the choice is remembered across sessions. If the id no longer exists in the database, no big deal - the choice will be messed up once. If the db does not change, however, then it will be a great success ;)

    Here is how I get the id:

        chosenSomethingIndex = cmbSomething.GetCurSel();
        lastSomethingId = cmbSomething.GetItemData(chosenSomethingIndex);

    How do I reverse this? When I load the stored value for the user's last choice, I need to convert that id into an index. I can do:

        cmbSomething.SetCurSel(chosenSomethingIndex);

    However, how can I attempt (it might not exist) to get an index once I have an id? I am looking for a reciprocal function to GetItemData. I am using VS2008, probably the latest version of MFC, whatever that is. Thank you.
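
    CComboBox has no built-in inverse lookup for item data, so the usual approach is a short linear search; a sketch (variable names follow the post, the helper itself is made up):

        // Return the index whose item data matches the stored id, or CB_ERR if
        // the id no longer exists in the combo box.
        int FindIndexByItemData(CComboBox& combo, DWORD_PTR id)
        {
            for (int i = 0; i < combo.GetCount(); ++i)
            {
                if (combo.GetItemData(i) == id)
                    return i;
            }
            return CB_ERR;
        }

        int idx = FindIndexByItemData(cmbSomething, lastSomethingId);
        if (idx != CB_ERR)
            cmbSomething.SetCurSel(idx);   // otherwise leave the selection alone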

    Read the article

  • How to use db4o IObjectContainer in a web application ? (Container lifetime ?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following:

    1. Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime.
    2. Create one IObjectContainer per request.
    3. Start a server, and get a client IObjectContainer for each database interaction.

    What are the implications of these options, in terms of performance and concurrency? Since the database is locked when an IObjectContainer is opened, I am pretty sure that option 2 would get me some problems with concurrency; would this also be the case for option 1?

    As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy data from a modified object), and then store it using the same IObjectContainer. Is this true?
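
    For illustration, a sketch of what option 3 tends to look like with the embedded client/server API (the file path, port 0 and SomeEntity are placeholders; this is an example of the shape, not a recommendation):

        using Db4objects.Db4o;
        using System.Collections.Generic;

        // At application startup (e.g. Application_Start):
        IObjectServer server = Db4oFactory.OpenServer(@"C:\data\app.db4o", 0);  // port 0 = embedded, in-process

        // Per request / per unit of work:
        IObjectContainer client = server.OpenClient();
        try
        {
            IList<SomeEntity> hits = client.Query<SomeEntity>(e => e.Id == 42); // SomeEntity is hypothetical
            if (hits.Count > 0)
            {
                SomeEntity item = hits[0];
                item.Name = "updated";
                client.Store(item);   // stored through the same container that loaded it
                client.Commit();
            }
        }
        finally
        {
            client.Close();
        }

        // At application shutdown:
        server.Close();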

    Read the article

  • C# Interface Method calls from a controller

    - by ArjaaAine
    I was just working on some application architecture and this may sound like a stupid question, but please explain to me how the following works.

    Interface:

        public interface IMatterDAL
        {
            IEnumerable<Matter> GetMattersByCode(string input);
            IEnumerable<Matter> GetMattersBySearch(string input);
        }

    Class:

        public class MatterDAL : IMatterDAL
        {
            private readonly Database _db;

            public MatterDAL(Database db)
            {
                _db = db;
                LoadAll(); //Private Method
            }

            public virtual IEnumerable<Matter> GetMattersBySearch(string input)
            {
                //CODE
                return result;
            }

            public virtual IEnumerable<Matter> GetMattersByCode(string input)
            {
                //CODE
                return results;
            }
        }

    Controller:

        public class MatterController : ApiController
        {
            private readonly IMatterDAL _publishedData;

            public MatterController(IMatterDAL publishedData)
            {
                _publishedData = publishedData;
            }

            [ValidateInput(false)]
            public JsonResult SearchByCode(string id)
            {
                var searchText = id; //better name for this
                var results = _publishedData.GetMattersBySearch(searchText).Select(
                    matter => new
                    {
                        MatterCode = matter.Code,
                        MatterName = matter.Name,
                        matter.ClientCode,
                        matter.ClientName
                    });
                return Json(results);
            }
        }

    This works: when I call my controller method from jQuery and step into it, the call to the _publishedData method goes into the class MatterDAL. I want to know how my controller knows to go to the MatterDAL implementation of the interface IMatterDAL. What if I have another class called MatterDAL2 which is based on the interface? How will my controller know then to call the right method? I am sorry if this is a stupid question; this is baffling me.
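
    What usually answers this is an IoC container wired up at application start; the controller never picks the implementation itself. A sketch of such a registration (Unity is only an example, and the project's actual container and bootstrap code are unknown):

        // e.g. in Global.asax or a bootstrapper class
        var container = new UnityContainer();
        container.RegisterType<IMatterDAL, MatterDAL>();   // swap to MatterDAL2 here and nothing else changes
        // (Database must also be registered/resolvable so MatterDAL's constructor can be satisfied.)

        // A dependency resolver / controller factory then builds MatterController through
        // the container, which is how its constructor receives whichever IMatterDAL was registered.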

    Read the article

  • Not getting key value from Identity column back after inserting new row with SubSonic ActiveRecord

    - by mikedevenney
    I'm sure I'm missing the obvious answer here, but I could use a hand. I'm new to SubSonic and using version 3. I've got myself to the point of being able to query and insert, but I'm stuck on how to get the value of the identity column back after my insert. I saw another post that mentioned Linq Templates. I'm not using those (at least I don't think I am...?). TIA.

    UPDATE: So I've been debugging through my code, watching how the SubSonic code works, and I found where the identity column is being ignored. I use int as the data type for my ID columns in the database and set them as identity. Since int is a non-nullable data type in C#, the logical test in the Add method (public void Add(IDataProvider provider)) that checks whether there is a value in the key column by doing a (key == null) could be the issue. The code that gets the new value for the identity field is in the 'true path'; since an int can't be null and I use ints as my identity column data types, this test will never pass. The ID field for my object has a 0 in it that I didn't put there. I assume it's set during the initialization of the object.

    Am I off base here? Is the answer to change my data types in the database?

    Another question (more a curiosity): I noticed that some of the properties in the generated classes are declared with a ? after the data type. I'm not familiar with this declaration construct... what gives? There are some declared as an int (non-key fields) and others that are declared as int? (key fields). Does this have something to do with how they're treated at initialization?

    Any help is appreciated!
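
    On the side question about the trailing ?: it is C# shorthand for Nullable<T>, which is exactly what lets a key property distinguish "no value assigned yet" from 0; a small illustration:

        int? key = null;               // Nullable<int>: a valid state for an identity column before the insert
        bool assigned = key.HasValue;  // false until a generated value is set
        int plain = 0;                 // a plain int can never be null, so a (key == null) check never passes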

    Read the article

  • Javascript function with PHP throwing an "Illegally Formed XML Syntax" error

    - by Joe
    I'm trying to learn some JavaScript and I'm having trouble figuring out why my code is incorrect (I'm sure I'm doing something wrong, lol). I am trying to create a login page so that when the form is submitted, JavaScript will call a function that checks if the login is in a MySQL database and then checks the validity of the password for the user if they exist. However, I am getting an error (Illegally Formed XML Syntax) I cannot resolve. I'm really confused, mostly because NetBeans is saying it is an XML syntax error and I'm not using XML. Here is the code in question:

        function validateLogin(login) {
            login.addEventListener("input", function() {
                $value = login.value;
                if (<?php
                    //connect to mysql
                    mysql_connect(host, user, pass) or die(mysql_error());
                    echo("<script type='text/javascript'>");
                    echo("alert('MYSQL Connected.');");
                    echo("</script>");
                    //select db
                    mysql_select_db() or die(mysql_error());
                    echo("<script type='text/javascript'>");
                    echo("alert('MYSQL Database Selected.');");
                    echo("</script>");
                    //query
                    $result = mysql_query("SELECT * FROM logins") or die(mysql_error());
                    //check results against given login
                    while($row = mysql_fetch_array($result)){
                        if($row[login] == $value){
                            echo("true");
                            exit(0);
                        }
                    }
                    echo("false");
                    exit(0);
                ?>) {
                    login.setCustomValidity("Invalid Login. Please Click 'Register' Below.")
                } else {
                    login.setCustomValidity("")
                }
            });
        }

    The code is in an external JS file and the error throws on the last line. Also, from reading I understand that best practice is not to mix JS and PHP, so how would I go about separating them while maintaining the functionality I need? Thanks!
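
    A sketch of the usual separation (the check_login.php endpoint, its parameter name and its "true"/"false" response are assumptions): the browser asks a small server-side script over XMLHttpRequest, and only the PHP side touches the database.

        function validateLogin(login) {
            login.addEventListener("input", function () {
                var xhr = new XMLHttpRequest();
                // check_login.php is a hypothetical endpoint that looks the login up in MySQL
                xhr.open("GET", "check_login.php?login=" + encodeURIComponent(login.value), true);
                xhr.onload = function () {
                    if (xhr.responseText === "true") {
                        login.setCustomValidity("");
                    } else {
                        login.setCustomValidity("Invalid Login. Please Click 'Register' Below.");
                    }
                };
                xhr.send();
            });
        }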

    Read the article

  • Read varbinary data in java

    - by masoud farahani
    I made a Java application which reads some files from SQL Server. Those files are saved to a varbinary(MAX) field in SQL Server by a third-party web service. My problem is that when I want to read those files with my Java application, the binary data shows different content in the Java application. In fact, I read the data byte by byte and figured out that some bytes did not show the real values which were saved in the database.

    I found out what the problem is, but I couldn't find a solution yet. In the web service, every varbinary value is saved to the database as byte data (in .NET a byte takes values 0 to 255). But when I read the binary data in Java, it takes different values and causes an exception with some values, because in Java a byte takes values -128 to 127.

    In my Java application I want to write that data to a file with the OutputStream.write(byte[]) method. How can I solve this problem? I think I have to find a way to convert a C# byte[] to a Java byte[] (or binary data), but how can I do that?
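
    A small sketch of the usual resolution (the column and file names are placeholders): the bit patterns are already identical on both sides, so the raw bytes can be written straight to the stream, and masking with & 0xFF is only needed when the numeric 0-255 value has to be inspected.

        import java.io.FileOutputStream;
        import java.io.OutputStream;
        import java.sql.ResultSet;

        static void saveBlob(ResultSet rs) throws Exception {
            byte[] data = rs.getBytes("file_data");      // hypothetical column name
            try (OutputStream out = new FileOutputStream("out.bin")) {
                out.write(data);                         // no conversion needed for writing
            }
            int firstUnsigned = data[0] & 0xFF;          // 0..255 view of a signed Java byte
            System.out.println("first byte as unsigned: " + firstUnsigned);
        }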

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        //for master server (i.e. - UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect( $Host, $User, $Password );
        if( !$Link ) {
            die( "Master database is currently unavailable. Please try again later." );
        }

        //this connection can be used for READ-ONLY (i.e. - SELECT statements) on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect( $Host_Local, $User_Local, $Password_Local );

        //fall back to master if slave db is down
        if( !$Link_Local ) {
            $Link_Local = mysql_connect( $Host, $User, $Password );
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until the slave server database goes down. If the local db is down, the $Link_Local = mysql_connect() call takes at least 30 seconds before it gives up on trying to connect to localhost and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time).

    Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads, but didn't see any that addressed this issue.
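
    A sketch of one way to bound the connect time (an illustration, not the site's actual include file): either lower mysql.connect_timeout for the old mysql_* API, or switch the read-only connection to mysqli, which exposes an explicit connect-timeout option.

        // Option 1: keep mysql_connect() but cap its timeout for this request
        ini_set('mysql.connect_timeout', 2);

        // Option 2: use mysqli for the slave connection with an explicit timeout
        $Link_Local = mysqli_init();
        $Link_Local->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);   // give up after ~2 seconds
        if (!@$Link_Local->real_connect($Host_Local, $User_Local, $Password_Local)) {
            // slave unreachable quickly, so fall back to the master
            $Link_Local = mysqli_init();
            $Link_Local->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
            $Link_Local->real_connect($Host, $User, $Password);
        }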

    Read the article

  • ASP.NET MVC 2 Model Data Persistence

    - by toccig
    I'm an MVC1 programmer, new to MVC2. The data will not persist to the database in an edit scenario; create works fine.

    Controller:

        //
        // POST: /Attendee/Edit/5
        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Attendee attendee)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(attendee, "Attendee");
                repository.Save();
                return RedirectToAction("Details", attendee);
            }
            else
            {
                return View(attendee);
            }
        }

    Model:

        [MetadataType(typeof(Attendee_Validation))]
        public partial class Attendee
        {
        }

        public class Attendee_Validation
        {
            [HiddenInput(DisplayValue = false)]
            public int attendee_id { get; set; }

            [HiddenInput(DisplayValue = false)]
            public int attendee_pin { get; set; }

            [Required(ErrorMessage = "* required")]
            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_fname { get; set; }

            [StringLength(50, ErrorMessage = "* Must be under 50 characters")]
            public string attendee_mname { get; set; }
        }

    I tried to add [Bind(Exclude="attendee_id")] above the class declaration, but then the value of the attendee_id attribute is set to '0'.

    View (strongly typed):

        <% using (Html.BeginForm()) {%>
            ...
            <%=Html.Hidden("attendee_id", Model.attendee_id) %>
            ...
            <%=Html.SubmitButton("btnSubmit", "Save") %>
        <% } %>

    Basically, the repository.Save(); call seems to do nothing. I imagine it has something to do with a primary key constraint violation, but I'm not getting any errors from SQL Server. The application appears to run fine, but the data is never persisted to the database.
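
    One pattern worth comparing against (a sketch; GetById is a hypothetical method standing in for whatever the repository offers): load the existing entity so the data context is tracking it, then bind the posted values onto that instance before saving, instead of calling UpdateModel on the freshly model-bound, untracked Attendee.

        [Authorize(Roles = "Admin")]
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(int id, FormCollection form)
        {
            // Fetch the tracked entity first, then copy the posted values onto it
            Attendee attendee = repository.GetById(id);

            if (TryUpdateModel(attendee))
            {
                repository.Save();   // persists changes made to a tracked entity
                return RedirectToAction("Details", new { id });
            }
            return View(attendee);
        }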

    Read the article

  • Can someone recommend a good tutorial on MySQL indexes, specifically when used in an order by clause

    - by Philip Brocoum
    I could try to post and explain the exact query I'm trying to run, but I'm going by the old adage of, "give a man a fish and he'll eat for a day, teach a man to fish and he'll eat for the rest of his life." SQL optimization seems to be very query-specific, and even if you could solve this one particular query for me, I'm going to have to write many more queries in the future, and I'd like to be educated on how indexes work in general.

    Still, here's a quick description of my current problem. I have a query that joins three tables and runs in 0.2 seconds flat. Awesome. I add an "order by" clause and it runs in 4 minutes and 30 seconds. Sucky. I denormalize one table so there is one fewer join, add indexes everywhere, and now the query runs in... 20 minutes. What the hell? Finally, I don't use a join at all, but rather a subquery with "where id in (...) order by" and now it runs in 1.5 seconds. Pretty decent.

    What in God's name is going on? I feel like if I actually understood what indexes were doing I could write some really good SQL. Anybody know some good tutorials? Thanks!
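
    A small sketch of the kind of index that usually rescues an ORDER BY (the table and column names are made up for illustration): when a single composite index covers both the filter and the sort, MySQL can read rows already in order and skip the filesort.

        -- hypothetical schema: orders(customer_id, created_at, ...)
        CREATE INDEX idx_orders_customer_created ON orders (customer_id, created_at);

        EXPLAIN
        SELECT *
        FROM orders
        WHERE customer_id = 42
        ORDER BY created_at DESC;
        -- with the index above, the Extra column should no longer show "Using filesort"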

    Read the article

  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

        class Customer(object):
            def __init__(self, uid, name, email):
                self.uid = uid
                self.name = name
                self.email = email

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)

    The above is basically a simple customer with an id, a name and an email address.

        class Order(object):
            def __init__(self, item_id, item_name, customer):
                self.item_id = item_id
                self.item_name = item_name
                self.customer = None

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)

    This is the Order class that just holds the order information: an id, a name and a reference to a customer. It's initialised to None to indicate that this item doesn't have a customer yet. The code's job will be to assign the item a customer. The following code maps these classes to their respective database tables.

        # SQLAlchemy database transmutation
        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()

        customers_table = Table('customers', metadata,
            Column('uid', Integer, primary_key=True),
            Column('name', String),
            Column('email', String)
        )

        orders_table = Table('orders', metadata,
            Column('item_id', Integer, primary_key=True),
            Column('item_name', String),
            Column('customer', Integer, ForeignKey('customers.uid'))
        )

        metadata.create_all(engine)

        mapper(Customer, customers_table)
        mapper(Order, orders_table)

    Now if I do something like:

        for order in session.query(Order):
            print order

    I can get a list of orders in this form:

        Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?
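
    A sketch of the usual way to surface the related row, written against the classic mapper()/relation() API the snippet already uses (the property names are a choice made here, not the post's code): map the raw foreign-key column under one name and add a relation() to Customer under another.

        from sqlalchemy.orm import mapper, relation, sessionmaker

        mapper(Customer, customers_table)
        mapper(Order, orders_table, properties={
            'customer_uid': orders_table.c.customer,   # the plain foreign-key value
            'customer': relation(Customer),            # the mapped Customer object
        })

        Session = sessionmaker(bind=engine)
        session = Session()

        for order in session.query(Order):
            cust = order.customer                      # follows the foreign key automatically
            if cust is not None:
                print cust.name, cust.email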

    Read the article

  • Random problem connecting to MySQL

    - by CharlesLeaf
    Environment: RHEL 5 servers, MySQL 5.1.43, PHP 5.1.6 (using MySQLi). Currently only available within our internal VPN network.

    Servers:
    - ServerA: web server
    - ServerB/C/D: database servers (1 master, 2 slaves)

    The error (on ServerA):

        [Tue May 25 11:12:17 2010] [error] [client CLIENTIP] PHP Warning: mysqli::real_connect() [function.mysqli-real-connect]: (HY000/2003): Can't connect to MySQL server on 'ServerB' (4) in /home/**/Database.php on line 67, referer: [website]

    Problem description: It appears that at completely random times, our website is unable to connect to one of the MySQL servers, usually the master. Except for the aforementioned error message, there is nothing to be found in any of the logs as far as I can see, and most of the time the connection is successful and everything works as it should. It's just at completely random times that this error pops up. There's no firewall blocking any internal traffic; the timeout value is 3, but it doesn't take 3 seconds before it fails to connect. With the default mysql client I can connect from ServerA to ServerB, C and D and haven't encountered a problem yet.

    Does anyone have a clue what I might be overlooking / what could be the problem? Because I've run out of ideas myself.

    Read the article

  • RegQueryValueEx not working with a Release version but working fine with Debug

    - by Nux
    Hi. I'm trying to read some ODBC details from the registry and for that I use RegQueryValueEx. The problem is that when I compile the release version it simply cannot read any registry values. The code is:

        CString odbcFuns::getOpenedKeyRegValue(HKEY hKey, CString valName)
        {
            CString retStr;
            char *strTmp = (char*)malloc(MAX_DSN_STR_LENGTH * sizeof(char));
            memset(strTmp, 0, MAX_DSN_STR_LENGTH);
            DWORD cbData;
            long rret = RegQueryValueEx(hKey, valName, NULL, NULL, (LPBYTE)strTmp, &cbData);
            if (rret != ERROR_SUCCESS) {
                free(strTmp);
                return CString("?");
            }
            strTmp[cbData] = '\0';
            retStr.Format(_T("%s"), strTmp);
            free(strTmp);
            return retStr;
        }

    I've found a workaround for this - I disabled optimization (/Od) - but it seems strange that I needed to do that. Is there some other way? I use Visual Studio 2005. Maybe it's a bug in VS?

    Almost forgot: the error code is 2 (as if the key wouldn't be found).
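
    A sketch of the same call with cbData initialised, which is worth checking first: RegQueryValueEx treats cbData as an in/out parameter, so it must hold the buffer size on entry, and an uninitialised stack value can happen to look sane in Debug builds while being garbage in optimised Release builds (the helper name below is made up).

        CString getOpenedKeyRegValueSketch(HKEY hKey, const CString& valName)
        {
            char strTmp[MAX_DSN_STR_LENGTH] = { 0 };
            DWORD cbData = sizeof(strTmp) - 1;   // in: buffer size, out: bytes copied
            LONG rret = RegQueryValueEx(hKey, valName, NULL, NULL,
                                        reinterpret_cast<LPBYTE>(strTmp), &cbData);
            if (rret != ERROR_SUCCESS)
                return CString("?");
            strTmp[cbData] = '\0';
            return CString(strTmp);
        }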

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

    - RAM buffers
    - the commit log
    - the binary log
    - the actual tables
    - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network attached). Plus the drives are just slow. The data is not high value, and I'd rather take a small chance of a few minutes of data loss than have a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to just keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
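
    For reference, a sketch of the my.cnf knobs usually reached for when relaxing durability this way (the values are illustrative judgement calls, not a tested recommendation for this site):

        # my.cnf (InnoDB write-durability trade-offs)
        innodb_flush_log_at_trx_commit = 2   # write the log at each commit, fsync roughly once per second
        sync_binlog = 0                      # let the OS decide when to flush the binary log
        innodb_buffer_pool_size = 4G         # keep the working set in RAM (size to the instance)
        innodb_log_file_size = 256M          # a larger redo log smooths bursts of writes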

    Read the article

  • Is excessive DataTable usage bad?

    - by Justin R.
    I was recently asked to assist another team in building an ASP.NET website. They already have a significant amount of code written; I was specifically asked to build a few individual pages for the site. While exploring the code for the rest of the site, the number of DataTables being constructed jumped out at me. Being relatively new to the field, I've never worked on an application that uses a database as much as this site does, so I'm not sure how common this is.

    It seems that whenever data is queried from our database, the results are stored in a DataTable. This DataTable is then usually passed around by itself, or it's passed to a constructor. Classes that are initialized with a DataTable always assign the DataTable to a private/protected field, yet only a few of these classes implement IDisposable. In fact, in the thousands of lines of code that I've browsed so far, I have yet to see the Dispose method called on a DataTable. If anything, this doesn't seem to be good OOP.

    Is this something that I should worry about? Or am I just paying more attention to detail than I should? Assuming you're more experienced developers than I am, how would you feel or react if someone who was just assigned to help you with your site approached you about this "problem"?

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all,

    First - this is not meant to be a 'which is better' war thread... Rather, I generally need help in making an architecture decision / argument to put forward to my boss.

    Skipping the details: I simply would love to find the results from anyone who has done performance comparisons of shell scripts vs. [insert general-purpose interpreted programming language here], such as C# or Java. Surprisingly, I have spent some time on Google and searching here without finding any of this data.

    Has anyone ever done these comparisons, in different use cases: hitting a database in a loop of X iterations doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD operations - and also not hitting the database, just a regular 50k-iteration loop doing different types of calculations, and things of that nature?

    In particular, for right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted general-purpose language would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations / instructions, etc.

    Before you ask 'why not just write a quick test yourself?', the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting, not to mention *nix as a whole. So asking the question here of the more experienced guys would be greatly beneficial, not to mention time saving, as we are in a near-perpetual deadline crunch as it is ;).

    Thanks so much in advance,

    Read the article
