Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

  • What is the best way to store static data in C# that will never change

    - by Luke101
    I have a class in an ASP.NET C# application that stores data that never changes. I really don't want to put this data in the database - I would like it to stay in the application. Here is how I currently store it:

        public class PostVoteTypeFunctions
        {
            private List<PostVoteType> postVotes = new List<PostVoteType>();

            public PostVoteTypeFunctions()
            {
                PostVoteType upvote = new PostVoteType();
                upvote.ID = 0;
                upvote.Name = "UpVote";
                upvote.PointValue = PostVotePointValue.UpVote;
                postVotes.Add(upvote);

                PostVoteType downvote = new PostVoteType();
                downvote.ID = 1;
                downvote.Name = "DownVote";
                downvote.PointValue = PostVotePointValue.DownVote;
                postVotes.Add(downvote);

                PostVoteType selectanswer = new PostVoteType();
                selectanswer.ID = 2;
                selectanswer.Name = "SelectAnswer";
                selectanswer.PointValue = PostVotePointValue.SelectAnswer;
                postVotes.Add(selectanswer);

                PostVoteType favorite = new PostVoteType();
                favorite.ID = 3;
                favorite.Name = "Favorite";
                favorite.PointValue = PostVotePointValue.Favorite;
                postVotes.Add(favorite);

                PostVoteType offensive = new PostVoteType();
                offensive.ID = 4;
                offensive.Name = "Offensive";
                offensive.PointValue = PostVotePointValue.Offensive;
                postVotes.Add(offensive);

                PostVoteType spam = new PostVoteType();
                spam.ID = 5;
                spam.Name = "Spam";
                spam.PointValue = PostVotePointValue.Spam;
                postVotes.Add(spam);
            }
        }

    This code runs every time the constructor is called, and I have some functions that query the data too. But is this the best way to store this kind of information in ASP.NET? If not, what would you recommend?
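
    One commonly suggested alternative - a sketch, not from the original thread (PostVoteType and PostVotePointValue are the question's own types) - is to build the list once in a static read-only field, so every request shares a single instance instead of rebuilding it on each constructor call:

        public static class PostVoteTypes
        {
            // Built once per AppDomain and never mutated afterwards,
            // so it is safe to share across requests.
            public static readonly IList<PostVoteType> All = new List<PostVoteType>
            {
                new PostVoteType { ID = 0, Name = "UpVote",       PointValue = PostVotePointValue.UpVote },
                new PostVoteType { ID = 1, Name = "DownVote",     PointValue = PostVotePointValue.DownVote },
                new PostVoteType { ID = 2, Name = "SelectAnswer", PointValue = PostVotePointValue.SelectAnswer },
                new PostVoteType { ID = 3, Name = "Favorite",     PointValue = PostVotePointValue.Favorite },
                new PostVoteType { ID = 4, Name = "Offensive",    PointValue = PostVotePointValue.Offensive },
                new PostVoteType { ID = 5, Name = "Spam",         PointValue = PostVotePointValue.Spam }
            }.AsReadOnly();
        }

    A lookup is then, for example, PostVoteTypes.All.First(v => v.Name == "Spam") with System.Linq in scope.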

  • Conditional WHERE Clauses in SQL Server 2008

    - by user336786
    Hello, I am trying to execute a query against a table in my SQL Server 2008 database. I have a stored procedure that takes five int parameters, currently defined as follows:

        @memberType int, @color int, @preference int, @groupNumber int, @departmentNumber int

    The procedure will be passed -1 or higher for each parameter. A value of -1 means the WHERE clause should not consider that parameter in the join/clause; if the value is greater than -1, I need to consider it in my WHERE clause. I would prefer NOT to use an IF-ELSE statement because it seems sloppy for this case. I saw this question here; however, it did not work for me. I think the reason is that each of the columns in my table can have a NULL value - someone pointed this scenario out in the fifth answer, and that appears to be what is happening to me. Is there a slick approach to this? Or do I just need to brute-force it (I hope not :()? Thank you!
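
    A pattern that often comes up for optional filters is to test the parameter before the column, so NULL columns only matter when the filter is actually active. A sketch of what the procedure's WHERE clause might look like (the column names are guesses based on the parameter names):

        WHERE (@memberType = -1 OR memberType = @memberType)
          AND (@color = -1 OR color = @color)
          AND (@preference = -1 OR preference = @preference)
          AND (@groupNumber = -1 OR groupNumber = @groupNumber)
          AND (@departmentNumber = -1 OR departmentNumber = @departmentNumber)

    Unlike the col = COALESCE(@param, col) style, a NULL column cannot silently drop rows here: when a parameter is -1, the left side of its OR is already true regardless of the column value.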

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that talks to a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure to pull back results based on selections in the UI. The UI has around 20 different inputs - a textbox, dropdown lists, checkboxes, etc. - grouped into logical sections. For example:

        Search box:  "Foo"
        Checkbox A1: ticked    Checkbox A2: unticked
        Dropdown A:  option 3 selected
        Checkbox B1: ticked    Checkbox B2: ticked    Checkbox B3: unticked

    So I need to call the sproc like this:

        exec SearchPage_FindResults
            @SearchQuery = 'Foo',
            @IncludeA1 = 1, @IncludeA2 = 0,
            @DropDownSelection = 3,
            @IncludeB1 = 1, @IncludeB2 = 1, @IncludeB3 = 0

    The UI is not too important to this question - just wanted to give some perspective. Essentially, I'm pulling back results for a search query and filtering them based on a bunch of (optional) selections the user can filter on. Now, my questions: What's the best way to pass these parameters to the stored procedure? Are there any tricks or newer features (e.g. in SQL Server 2008) for this? Special "table" parameters/arrays - can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required. What about XML - what are the serialization/deserialization costs here, and is it worth it? How about a parameter for each logical section, comma-separated perhaps? Just thinking out loud. This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications - keep that in mind. With that said, what is the best approach here?
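
    SQL Server 2008 does support table-valued parameters, which is one way to collapse many optional filters into a single argument. A hedged sketch in classic ADO.NET (EF 4.0 has no direct mapping for table-valued parameters, so this drops down to ADO.NET, which the question allows); the dbo.SearchFilter table type and its columns are invented for illustration:

        // Assumes: CREATE TYPE dbo.SearchFilter AS TABLE (Name nvarchar(50), Value nvarchar(50))
        var filters = new DataTable();
        filters.Columns.Add("Name", typeof(string));
        filters.Columns.Add("Value", typeof(string));
        filters.Rows.Add("SearchQuery", "Foo");
        filters.Rows.Add("IncludeA1", "1");
        filters.Rows.Add("DropDownSelection", "3");

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SearchPage_FindResults", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            SqlParameter p = cmd.Parameters.AddWithValue("@Filters", filters);
            p.SqlDbType = SqlDbType.Structured;   // marks it as a table-valued parameter
            p.TypeName = "dbo.SearchFilter";
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // read results...
            }
        }

    The procedure would then join against @Filters instead of declaring 20 scalar parameters; whether that actually beats plain scalar parameters here is something to measure, not assume.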

  • A better way of getting a data table with various column types into a string array

    - by Vlad
    This should be an easy one; it looks like I got myself confused. I get a table from a database, with data ranging from varchar to int to NULL values. The cheap and dirty way I already have of converting this into a tab-delimited file is this (shrunken to preserve space, ugliness kept on par with the original):

        da.Fill(dt)  ' da - DataAdapter, dt - DataTable

        Dim lColumns As Long = dt.Columns.Count
        Dim arrColumns(dt.Columns.Count) As String
        Dim arrData(dt.Columns.Count) As Object
        Dim j As Long = 0

        For i = 0 To dt.Rows.Count - 1
            arrData = dt.Rows(i).ItemArray()
            For j = 0 To arrData.GetUpperBound(0) - 1
                arrColumns(j) = arrData(j).ToString
            Next
            wrtOutput.WriteLine(String.Join(strFieldDelimiter, arrColumns))
            Array.Clear(arrColumns, 0, arrColumns.GetLength(0))
            Array.Clear(arrData, 0, arrData.GetLength(0))
        Next

    Not only is this ugly and inefficient, it is also getting on my nerves. Besides, I want, if possible, to avoid the infamous double loop through the table. I would really appreciate a clean and safe way of rewriting this piece. I like the approach used here - especially since it tries to solve the same problem I have - but it crashes on me when I apply it directly to my case.
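
    For comparison, a minimal sketch of the same loop in C# rather than VB (writer stands in for wrtOutput, and .NET 3.5+ is assumed for LINQ; DBNull values become empty fields because Convert.ToString returns an empty string for them):

        foreach (DataRow row in dt.Rows)
        {
            // One pass per row: stringify every column and join with tabs.
            string line = string.Join("\t",
                row.ItemArray.Select(v => Convert.ToString(v)).ToArray());
            writer.WriteLine(line);
        }

    The inner column loop disappears into Select, and there is nothing to clear between rows because the strings are rebuilt on each iteration.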

  • How can I exclude LEFT JOINed tables from TOP in SQL Server?

    - by Kalessin
    Let's say I have two tables of books and two tables of their corresponding editions. I have a query as follows:

        SELECT TOP 10 *
        FROM (SELECT hbID, hbTitle, hbPublisherID, hbPublishDate, hbedID, hbedDate
              FROM hardback
              LEFT JOIN hardbackEdition ON hbID = hbedID
              UNION
              SELECT pbID, pbTitle, pbPublisher, pbPublishDate, pbedID, pbedDate
              FROM paperback
              LEFT JOIN paperbackEdition ON pbID = pbedID
             ) books
        WHERE hbPublisherID = 7
        ORDER BY hbPublishDate DESC

    If there are 5 editions of the first two hardback and/or paperback books, this query only returns two books. However, I want the TOP 10 to apply only to the number of actual book records returned. Is there a way I can select 10 actual books and still get all of their associated edition records? In case it's relevant, I do not have database permissions to CREATE and DROP temporary tables. Thanks for reading!

    Update - to clarify: the paperback table has an associated table of paperback editions, and the hardback table has an associated table of hardback editions. The hardback and paperback tables are not related to each other, except that the user will (hopefully!) see them displayed together.
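
    One commonly suggested rearrangement is to apply TOP to the books alone and only then join the editions onto that result. A sketch (the fmt discriminator column is invented so each edition table joins only to its own kind of book, and the publisher filter is left where the original query applied it):

        SELECT b.*, hbe.hbedID, hbe.hbedDate, pbe.pbedID, pbe.pbedDate
        FROM (SELECT TOP 10 bookID, title, publisher, publishDate, fmt
              FROM (SELECT hbID AS bookID, hbTitle AS title, hbPublisherID AS publisher,
                           hbPublishDate AS publishDate, 'hb' AS fmt
                    FROM hardback
                    UNION ALL
                    SELECT pbID, pbTitle, pbPublisher, pbPublishDate, 'pb'
                    FROM paperback
                   ) allBooks
              WHERE publisher = 7
              ORDER BY publishDate DESC
             ) b
        LEFT JOIN hardbackEdition hbe ON b.fmt = 'hb' AND b.bookID = hbe.hbedID
        LEFT JOIN paperbackEdition pbe ON b.fmt = 'pb' AND b.bookID = pbe.pbedID
        ORDER BY b.publishDate DESC

    TOP now counts books rather than book-edition rows, and no temporary tables are involved.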

  • Get unique data in a SQL query

    - by Jensen
    Hi, I have a database that contains data in this form:

        icon(name, size, tag)
        (myicon.png,      16,  'twitter')
        (myicon.png,      32,  'twitter')
        (myicon.png,      128, 'twitter')
        (myicon.png,      256, 'twitter')
        (anothericon.png, 32,  'facebook')
        (anothericon.png, 128, 'facebook')
        (anothericon.png, 256, 'facebook')

    As you can see, the name field is not unique: I can have multiple icons with the same name, distinguished by the size field. Now in PHP I have a query that gets ONE icon per set, for example:

        $dbQueryIcons = mysql_query("SELECT * FROM pl_icon WHERE tag LIKE '%".$SEARCH_QUERY."%'
            GROUP BY name ORDER BY id DESC
            LIMIT ".$firstEntry.", ".$CONFIG['icon_per_page']."") or die(mysql_error());

    With this example, if $tag contains 'twitter' it will show ONLY the first matching row with the tag 'twitter', which would be:

        (myicon.png, 16, 'twitter')

    This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and another size otherwise? In another question someone gave me a solution with GROUP BY, but that doesn't work here because we already GROUP BY name - and if I delete the GROUP BY, it shows me every size of the same icon. Thanks!

  • Is it possible to update only selected properties on an existing entity object without touching the others

    - by LaserBeak
    Say I have a bunch of boolean properties (public bool IsActive, etc.) on my entity class - values that will be manipulated by setting checkboxes in a web application. I will ONLY be posting back the one changed name/value pair and the primary key at a time, say { isActive: true, NewsPageID: 34 }, and the default model binder will create a NewsPage object with only those two properties set. Now if I run the code below, it will not only update the values for the properties that were set on the NewsPage object created by the model binder, but of course also attempt to null all the other values on the existing entity, because they were never set on the bound object. Is it possible to somehow tell Entity Framework not to look at the properties that are null and not persist those changes back to the retrieved entity object, and hence the database? Perhaps there's some code I can write that will use only the non-null values and their property names on the model-bound NewsPage object and update those particular properties alone?

        [HttpPost]
        public PartialViewResult SaveNews(NewsPage Np)
        {
            Np.ModifyDate = DateTime.Now;
            _db.NewsPages.Attach(Np);
            _db.ObjectStateManager.ChangeObjectState(Np, System.Data.EntityState.Modified);
            _db.SaveChanges();
            _db.Dispose();
            return PartialView("MonthNewsData");
        }

    I can of course do something like the below, but I have a feeling it's not the optimal solution, especially considering that I have about six boolean properties to set:

        [HttpPost]
        public PartialViewResult SaveNews(int NewsPageID, bool? isActive, bool? isOnFrontPage)
        {
            if (isActive != null)
            {
                // Get entity and update this property
            }
            if (isOnFrontPage != null)
            {
                // Get entity and update this property
            }
        }
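
    In EF 4.0 the ObjectStateManager can mark individual properties as modified instead of the whole entity. A minimal sketch, assuming the bound NewsPage exposes nullable bools so "not posted" is distinguishable from false (the property names echo the question and may differ in the real model):

        _db.NewsPages.Attach(np);   // attached as Unchanged, not Modified
        ObjectStateEntry entry = _db.ObjectStateManager.GetObjectStateEntry(np);

        entry.SetModifiedProperty("ModifyDate");
        if (np.IsActive.HasValue)
            entry.SetModifiedProperty("IsActive");      // only flagged columns are updated
        if (np.IsOnFrontPage.HasValue)
            entry.SetModifiedProperty("IsOnFrontPage");

        _db.SaveChanges();          // the UPDATE touches only the flagged columns

    Properties that are never flagged keep their current database values, which is exactly the "don't touch the others" behavior the question asks about.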

  • How do I put my return data from an asmx into JSON?

    - by jphenow
    I want to return an array of JavaScript objects from my ASP.NET asmx file, i.e.:

        variable = [
            { value1: 'value1', value2: 'value2', ... },
            { ... }
        ];

    I seem to have been having trouble achieving this. I'd put my attempt into code, but I've been hacking away at it so much it would probably do more harm than good in getting this answered. Basically, I am using a web service to find names as people type. I'd use a regular text file or something, but it's a huge database that's always changing - and don't worry, I've indexed the names so searching can be a little snappier - but I would really prefer to stick with this method and just figure out how to get usable JSON back to JavaScript. I've seen a few articles that sort of attempt to describe how one would approach this, but I honestly think Microsoft's articles are damn near unreadable. Thanks in advance for any assistance. EDIT: I'm using the $.ajax() function from jQuery. I've had it working, but it seems like it was bad practice, since I wasn't returning and using actual JSON: previously I'd take a string back and insert it into HTML to use the variable it set - very roundabout.
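
    One standard approach for asmx services is to let ASP.NET do the JSON serialization. A sketch (NameRecord and LookupNames are placeholders for whatever the real service uses):

        [System.Web.Script.Services.ScriptService]   // enables JSON responses
        public class NameService : System.Web.Services.WebService
        {
            [System.Web.Services.WebMethod]
            public List<NameRecord> FindNames(string prefix)
            {
                // Return typed objects; the framework serializes them to JSON
                // when the request declares a JSON content type.
                return LookupNames(prefix);
            }
        }

    From jQuery, POST with contentType: 'application/json; charset=utf-8' and a JSON-encoded data body; in .NET 3.5 the array comes back wrapped as response.d, so no manual string building or HTML insertion is needed.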

  • Recreating a workflow instance with the same instance id

    - by Miron Brezuleanu
    We have some objects that have an associated workflow instance. Each object is identified by a GUID, which is also the GUID of the workflow instance associated with it. We need to restart (see note 3 for the meaning of 'restart') the workflow instance if the workflow definition has changed (there is no state in the workflow itself, and it is written to support restarting in this manner). The restart is performed by calling Terminate on the WorkflowInstance, then recreating the instance with the same GUID. The weird part is that this only works every other attempt: on odd attempts the workflow is stopped but for some reason doesn't restart; on even attempts the already-terminated workflow is recreated and started successfully. While I admit that using 'second hand' GUIDs is a sign of extraordinary cheapness (and something we plan to change), I'm wondering why this isn't working. Any ideas?

    Notes:
    1. The terminated workflow instance is passivated (waiting for a notification) at the time of the termination.
    2. The Terminate call successfully deletes the data persisted in the database for that instance.
    3. We're using 'restarting' with a meaning that's less common in the context of WF - not resuming a passivated instance, but forcing the workflow to start again from the beginning of its definition.

    Thanks!

  • MySql ODBC connection in VB6 on WinXP VERY slow. Other machines on same network are fast.

    - by Matthew
    Hi all, I have a VB6 application that has been performing very well. Recently, we upgraded our server to Windows Server 2003. Migration of the databases and shares went well and we experienced no problems - except one, and it has happened at multiple sites. I use the MySQL ODBC 5.1 connector to point to my MySQL database. On identical machines (as far as I can tell - they are client machines, not ours), access to the DB is lightning fast on all but one computer. They use the same software and have the same connection strings, and I'm sure it's not the program but the ODBC connection. When I press the 'Test Connection' button in the ODBC connection string window, it can take up to 10 seconds on the poorly performing machine to respond with a success; all the other computers are instantaneous. I have tried using the IP address instead of the machine name in the UDL, with no change. I enabled option 256, which sped it up initially, but it's slow again. Most of the time after a restart the program will be fast for an hour or so, then go slow again with option 256 enabled. Frankly, I am out of ideas and willing to entertain any and all suggestions. This is getting pretty frustrating. Has anyone ever experienced anything like this?

  • Code coverage with phpunit; can't get to one place.

    - by HA17
    In the xdebug code coverage, the line "return false;" (below "!$r") shows as not covered by my tests. But $sql is basically hard-coded, so how do I get coverage on that? Do I overwrite $table somehow? Or kill the database server for this part of the test? I guess this is probably telling me I'm not writing my model very well, right? Because I can't test it well - how can I write this better? Since this one line is not covered, the whole method counts as uncovered and the reports are off. I'm fairly new to phpunit. Thanks.

        public function retrieve_all()
        {
            $table = $this->tablename();
            $sql = "SELECT t.* FROM `{$table}` as t";
            $r = dbq($sql, 'read');
            if (!$r) {
                return false;
            }
            $ret = array();
            while ($rs = mysql_fetch_array($r, MYSQL_ASSOC)) {
                $ret[] = $rs;
            }
            return $ret;
        }

  • Stored procedure or function expects parameter which is not supplied

    - by user2920046
    I am trying to insert data into a SQL Server database by calling a stored procedure, but I am getting the error "Procedure or function 'SHOWuser' expects parameter '@userID', which was not supplied." My stored procedure is called SHOWuser. I have checked it thoroughly and no parameter is missing. My code is:

        public void SHOWuser(string userName, string password, string emailAddress, List<int> preferences)
        {
            SqlConnection dbcon = new SqlConnection(conn);
            try
            {
                SqlCommand cmd = new SqlCommand();
                cmd.Connection = dbcon;
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "SHOWuser";
                cmd.Parameters.AddWithValue("@userName", userName);
                cmd.Parameters.AddWithValue("@password", password);
                cmd.Parameters.AddWithValue("@emailAddress", emailAddress);
                dbcon.Open();
                int i = Convert.ToInt32(cmd.ExecuteScalar());

                cmd.Parameters.Clear();
                cmd.CommandText = "tbl_pref";
                foreach (int preference in preferences)
                {
                    cmd.Parameters.Clear();
                    cmd.Parameters.AddWithValue("@userID", Convert.ToInt32(i));
                    cmd.Parameters.AddWithValue("@preferenceID", Convert.ToInt32(preference));
                    cmd.ExecuteNonQuery();
                }
            }
            catch (Exception)
            {
                throw;
            }
            finally
            {
                dbcon.Close();
            }
        }

    and the stored procedure is:

        ALTER PROCEDURE [dbo].[SHOWuser]
        -- Add the parameters for the stored procedure here
        (
            @userName varchar(50),
            @password nvarchar(50),
            @emailAddress nvarchar(50)
        )
        AS
        BEGIN
            INSERT INTO tbl_user(userName, password, emailAddress)
            VALUES (@userName, @password, @emailAddress)

            SELECT tbl_user.userID, tbl_user.userName, tbl_user.password, tbl_user.emailAddress,
                   STUFF((SELECT ',' + preferenceName
                          FROM tbl_pref_master
                          INNER JOIN tbl_preferences ON tbl_pref_master.preferenceID = tbl_preferences.preferenceID
                          WHERE tbl_preferences.userID = tbl_user.userID
                          FOR XML PATH ('')), 1, 1, ' ') AS Preferences
            FROM tbl_user

            SELECT SCOPE_IDENTITY();
        END

    Please help, thanks in advance...
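
    One detail that stands out in code like this (an observation, not from the original thread): after Parameters.Clear(), the same command is reused with CommandText = "tbl_pref" while CommandType is still StoredProcedure, so ADO.NET tries to execute a procedure by that name. A sketch of a separate, plain-SQL command for the preference rows, assuming a tbl_preferences(userID, preferenceID) table as implied by the procedure's join:

        using (SqlCommand prefCmd = new SqlCommand(
            "INSERT INTO tbl_preferences (userID, preferenceID) VALUES (@userID, @preferenceID)",
            dbcon))
        {
            prefCmd.CommandType = System.Data.CommandType.Text;  // plain SQL, not a procedure
            prefCmd.Parameters.Add("@userID", System.Data.SqlDbType.Int);
            prefCmd.Parameters.Add("@preferenceID", System.Data.SqlDbType.Int);
            foreach (int preference in preferences)
            {
                prefCmd.Parameters["@userID"].Value = i;              // identity returned by SHOWuser
                prefCmd.Parameters["@preferenceID"].Value = preference;
                prefCmd.ExecuteNonQuery();
            }
        }

    Note too that ExecuteScalar returns the first column of the procedure's first result set (the SELECT from tbl_user), not SCOPE_IDENTITY(), which is worth verifying before using i as the new user's ID.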

  • Android and fairly large SQLite datafiles

    - by SK9
    I'm starting an Android project, a port of an existing iPhone project I've completed. I have a fairly large read-only SQLite database, about 100 MB in all, called "mydata.sqlite". Where do I place this in my Eclipse workspace? It's too big for "assets". Next, how do I best get at the file? I would think to try (handling exceptions later) something like:

        SQLiteDatabase myDatabase = null;
        myDatabase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);

    But I would then need the path string myPath, and since I don't know where to put the resource, I don't know what this needs to be. Can I put "mydata.sqlite" into "res/raw" (once I create "raw" in Eclipse) and then reference it as a resource with "R.raw.mydata"? I would very much appreciate some direct help here, rather than a reference to a tutorial. I have checked tons of these, including those already cited here on Stack Overflow, and I've also gone through the "Notepad" project in the Android developer documents. However, these and the documentation typically consider only new, empty or small databases. This should be a simple thing, and given the time I've spent already it is perhaps easier to ask. Thanking you kindly in advance for your assistance.

  • .Net Hash Codes no longer persistent?

    - by RobV
    I have an API where various types have custom hash codes. These hash codes are based on hashing a string representation of the object in question. Various salting techniques are used so that, as far as possible, hash codes do not collide, and so that objects of different types with equivalent string representations get different hash codes. Obviously, since the hash codes are based on strings, there are some collisions (infinite strings vs the limited range of 32-bit integers). I use hashes based on string representations because I need the hashes to persist across sessions, particularly for use in database storage of objects. Suddenly, today, my code has started generating different hash codes for objects, which is breaking all kinds of things. It was working earlier today, and I haven't touched any of the code involved in hash code generation. I'm aware that the .NET documentation allows the implementation of hash codes to change between framework versions (and between 32- and 64-bit versions), but I haven't changed the framework version, and there have been no framework updates recently as far as I can remember. Any ideas? This seems really weird. Edit - hash codes are generated as follows:

        // Compute hash code
        this._hashcode = (this._nodetype + this.ToString() + PlainLiteralHashCodeSalt).GetHashCode();
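
    Since String.GetHashCode() is explicitly documented as unstable across framework versions and bitnesses, hashes that get persisted generally need an algorithm the application controls. A minimal sketch of one option, not from the original code - deriving a 32-bit value from a cryptographic digest of the same salted string:

        using System.Security.Cryptography;
        using System.Text;

        static int StableHash(string s)
        {
            using (SHA256 sha = SHA256.Create())
            {
                // SHA-256 is fully specified, so the first four bytes of the
                // digest yield the same int on any framework version or platform.
                byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(s));
                return BitConverter.ToInt32(digest, 0);
            }
        }

    Rows already hashed with the old scheme would still need a one-time rehash, since those values came from the framework's own implementation.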

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approximately 8 slaves replicating from it, but my understanding is that having multiple slaves connected to the same master does not significantly slow down performance, if at all. The master server has 16 GB RAM, 10 terabytes of drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4 GB RAM but similar hard drives, RAID, etc. It also ran Apache, so it was both our DB server and our application server. It was getting a little slow, so we moved the DB onto this new machine and kept the application server on the first one. We also distributed the application load among a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really making the app run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net covers config files for small machines, so can anyone help me get the my.ini file right for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: we have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like server-id) and it will kill the server at startup. Here is the my.ini file:

        # MySQL Server Instance Configuration File
        # Generated by the MySQL Server Instance Configuration Wizard

        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        [mysqld]
        # The TCP/IP port the MySQL server will listen on
        port=3306

        # Installation directory and database root
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"

        default-character-set=latin1
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the server will allow
        # (one is reserved for a SUPER user even at the limit).
        max_connections=1510

        # Query cache, used to cache SELECT results and return them without
        # re-executing identical queries; see Qcache_lowmem_prunes to check sizing.
        query_cache_size=168M

        # The number of open tables for all threads; raise open-files-limit to
        # at least 4096 in [mysqld_safe] to match.
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables, per table.
        tmp_table_size=30M

        # How many threads to keep cached for reuse on new connections.
        thread_cache_size=64

        # *** MyISAM specific options
        # Maximum size of the temporary file used while recreating an index.
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=64M

        # Key buffer, caching index blocks for MyISAM tables; keep below ~30%
        # of available memory so the OS can still cache rows.
        key_buffer_size=3072M

        # Per-thread buffers for full scans, random reads and sorts.
        read_buffer_size=2M
        read_rnd_buffer_size=8M
        sort_buffer_size=2M

        # *** INNODB specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"
        # InnoDB is not used; disabling it saves memory and disk space.
        skip-innodb
        innodb_additional_mem_pool_size=11M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=6M
        innodb_buffer_pool_size=500M
        innodb_log_file_size=100M
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • Change HTML DropDown Default Value with a MySQL value

    - by fzr11017
    I'm working on a profile page where a registered user can update their information. Because the user has already submitted their information, I would like their values from the database to populate my HTML form. Within PHP, I'm creating the HTML form with the values filled in, and I've tried adding an IF statement to each option to determine whether it should be the selected default. Right now, my website always gives me the last option, Undeclared, as the default, so I'm not sure whether all the IF statements are evaluating as true or it is simply skipping to the last selected="selected". Here is my HTML, which is currently embedded in PHP:

        <select name="Major">
            <option if($row[Major] == Accounting){ selected="selected"}>Accounting</option>
            <option if($row[Major] == Business Honors Program){ selected="selected"}>Business Honors Program</option>
            <option if($row[Major] == Engineering Route to Business){ selected="selected"}>Engineering Route to Business</option>
            <option if($row[Major] == Finance){ selected="selected"}>Finance</option>
            <option if($row[Major] == International Business){ selected="selected"}>International Business</option>
            <option if($row[Major] == Management){ selected="selected"}>Management</option>
            <option if($row[Major] == Management Information Systems){ selected="selected"}>Management Information Systems</option>
            <option if($row[Major] == Marketing){ selected="selected"}>Marketing</option>
            <option if($row[Major] == MPA){ selected="selected"}>MPA</option>
            <option if($row[Major] == Supply Chain Management){ selected="selected"}>Supply Chain Management</option>
            <option if($row[Major] == Undeclared){ selected="selected"}>Undeclared</option>
        </select>

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local data storage (an XML file) or SQL Server based storage. A long time ago I created a typed DataSet for my application to save data locally in the XML file. Now I have a bool that switches between the server-based version and the local version; if true, my application gets its data from SQL Server. I'm not sure, but it seems that SqlDataAdapter's Fill method can't fill the data into my existing schema:

        SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Categories WHERE CatUserId = 1", _connection);
        cmd.CommandType = CommandType.Text;
        _sqlAdapter = new SqlDataAdapter(cmd);
        _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories");
        _sqlAdapter.Fill(Program.Dataset);

    This should fill the data from dbo.Categories into Categories (in my local, typed DataSet), but it doesn't. Instead it creates a new table with the name "Table". It looks like it can't handle the existing schema, and I can't figure out where the problem is. (By the way, querying the database this way isn't very useful on its own - it's just a simplified version for testing...)
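
    When Fill is handed a whole DataSet, the incoming result set is given the default source name "Table", so the mapping has to go from that source name to the typed table. A sketch of the usual fix (assuming the typed table inside Program.Dataset is named Categories):

        _sqlAdapter = new SqlDataAdapter(cmd);
        // Map the adapter's default source table name ("Table") onto the
        // existing typed DataTable so Fill targets it instead of creating one.
        _sqlAdapter.TableMappings.Add("Table", "Categories");
        _sqlAdapter.Fill(Program.Dataset);

    Alternatively, the overload that takes a DataTable skips name mapping entirely: _sqlAdapter.Fill(Program.Dataset.Categories).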

  • Windows Azure Worldwide availability

    - by Insomniac
    Hi, I've been reviewing the Windows Azure platform for some time, and can't find the answer to one very important question. If I deploy my application in the cloud, how will it be reached from different places worldwide? For example, if I have a web application with a database and want it to be accessible to users in the UK, US, China and elsewhere, can I be sure that any user in the world will get roughly the same request processing time? I think of it this way:

    1. The user sends a request (navigates in a browser to my web site).
    2. The request enters the cloud at the nearest location (the MS data center closest to the user?).
    3. It is processed by an instance of my web application (in the nearest location, with a request to my centralized DB, which may be far away - but the SQL request goes over MS's internal network, which I believe should be very fast).
    4. The response is sent to the user.

    Please let me know if I'm wrong. Thanks.

  • Access problems with IIS 7 and a WCF service

    - by Steve
    I have a Silverlight app that calls a WCF service; the service calls some stored procedures in a SQL DB using Visual Studio 2008's LINQ to SQL classes and returns the information to whatever called it. I have set up the compiled project (the website with the embedded app and the WCF service) on a remote IIS 7 server. I recompiled my local copy to use the WCF service that is now hosted on the IIS box rather than the one on the local dev server that Visual Studio provides. If I use the local version of the website (hosted on the dev server, using the WCF service on IIS), it is able to make the calls it needs and display the information. However, if I use the website being hosted by the remote IIS server, the app will not get the information it needs from the service. On the IIS server, I have the application pool and the website running under my credentials, which have access to the database. Users connecting to the webpage use anonymous authentication. Any ideas as to why I can only access the service when running from the dev server, and not through the remotely hosted webpage, are appreciated. If anything needs clarification, please ask.

  • Connecting to MySQL from Excel: ODBC driver does not support the requested properties

    - by every_answer_gets_a_point
    I am trying to add data to MySQL from Excel, and I am getting the above error on this line:

        rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic

    Here is my code:

        Dim oConn As ADODB.Connection

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=some_pass;" & _
                       "Option=3"
        End Sub

        Function esc(txt As String)
            esc = Trim(Replace(txt, "'", "\'"))
        End Function

        Private Sub InsertData()
            Dim rs As ADODB.Recordset
            Set rs = New ADODB.Recordset
            ConnectDB
            With wsBooks
                For rowCursor = 2 To 11
                    strSQL = "INSERT INTO tutorial (author, title, price) " & _
                             "VALUES ('" & esc(.Cells(rowCursor, 1)) & "', " & _
                             "'" & esc(.Cells(rowCursor, 2)) & "', " & _
                             esc(.Cells(rowCursor, 3)) & ")"
                    rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic
                Next
            End With
        End Sub

    What's wrong with rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic? Why am I getting the ODBC error?

  • hibernate empty collection in component

    - by Jurgen H
    I have a component mapped using Hibernate. If all of the component's fields in the database are null, the component itself is set to null by Hibernate. This is the expected behavior, and also what I need. The problem I have is that when I add a bag to that component, the bag is initialized to an empty list. This means the component has a non-null value, resulting in the component being created. Any idea how to fix this?

        <class name="foo.bar.Entity" table="Entity">
            <id name="id" column="id">
                <generator class="native" />
            </id>
            <property name="departure" column="departure_time" />
            <property name="arrival" column="arrival_time" />
            <component name="statistics">
                <bag name="linkStatistics" lazy="false" cascade="all">
                    <key column="entity_id" not-null="true" />
                    <one-to-many class="foo.bar.LinkStatistics" />
                </bag>
                <property name="loggedTime" column="logged_time" />
                ...
            </component>
        </class>

    A criteria query with Restrictions.isNull("statistics") does return the expected values.

  • How would I authenticate against a local windows user on another machine in an ASP.NET application?

    - by Daniel Chambers
    In my ASP.NET application, I need to be able to authenticate/authorize against local Windows users/groups (i.e. not Active Directory) on a different machine, as well as change the passwords of said remote local Windows accounts. Yes, I know Active Directory is built for this sort of thing, but unfortunately the higher-ups have decreed it needs to be done this way (so authenticating against users in a database is out as well). I've tried using DirectoryEntry and WinNT like so:

        DirectoryEntry user = new DirectoryEntry(
            String.Format("WinNT://{0}/{1},User", serverName, username),
            username, password, AuthenticationTypes.Secure);

    but this results in an exception when you try to log in more than one user: "Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I've tried making sure my DirectoryEntries are used inside a using block so they're disposed properly, but this doesn't seem to fix the issue. Plus, even if that did work, two users could hit that line of code concurrently and therefore try to create multiple connections, so it would be fragile anyway. Is there a better way to authenticate against local Windows accounts on a remote machine, authorize against their groups, and change their passwords? Thanks for your help in advance.
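
    One alternative worth sketching (it assumes .NET 3.5's System.DirectoryServices.AccountManagement is available, which the question does not mention): PrincipalContext can validate credentials and manage a remote machine's local accounts without the WinNT:// provider's connection semantics:

        using System.DirectoryServices.AccountManagement;

        using (PrincipalContext ctx = new PrincipalContext(ContextType.Machine, serverName))
        {
            // Authenticate the remote local account.
            bool valid = ctx.ValidateCredentials(username, password);

            // Group checks and password changes go through UserPrincipal.
            using (UserPrincipal user = UserPrincipal.FindByIdentity(ctx, username))
            {
                if (user != null)
                {
                    bool inGroup = user.IsMemberOf(ctx, IdentityType.SamAccountName, "SomeLocalGroup");
                    user.ChangePassword(oldPassword, newPassword);
                }
            }
        }

    Whether this avoids the "multiple connections" error under concurrent logins is worth verifying against the actual server, since that restriction comes from Windows' session handling rather than from the API used.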

  • Subquery with multiple results combined into a single field?

    - by Todd
    Assume I have these tables, from which I need to display search results in a browser:

        Table: Containers
        id | name
        1  | Big Box
        2  | Grocery Bag
        3  | Envelope
        4  | Zip Lock

        Table: Sale
        id | date     | containerid
        1  | 20100101 | 1
        2  | 20100102 | 2
        3  | 20091201 | 3
        4  | 20091115 | 4

        Table: Items
        id | name        | saleid
        1  | Barbie Doll | 1
        2  | Coin        | 3
        3  | Pop-Top     | 4
        4  | Barbie Doll | 2
        5  | Coin        | 4

    I need output that looks like this:

        itemid  itemname     saleids  saledates          containerids  containertypes
        1       Barbie Doll  1,2      20100101,20100102  1,2           Big Box, Grocery Bag
        2       Coin         3,4      20091201,20091115  3,4           Envelope, Zip Lock
        3       Pop-Top      4        20091115           4             Zip Lock

    The important part is that each item type only gets one record/row in the result. I accomplished this in the past by returning multiple rows of the same item and using a scripting language to limit the output. However, this makes the UI overly complicated and loopy, so I'm hoping I can get the database to emit only as many records as there are rows to display. This example may be a bit extreme because of the two joins needed to get from the item to the container (through the Sale table), so I'd be happy with just an example query that outputs this:

        itemid  itemname     saleids  saledates
        1       Barbie Doll  1,2      20100101,20100102
        2       Coin         3,4      20091201,20091115
        3       Pop-Top      4        20091115

    I can only return a single result in a subquery, so I'm not sure how to do this.
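
    If the database is MySQL (the question doesn't say, so this is an assumption), GROUP_CONCAT collapses the per-sale rows into delimited fields - a sketch against the tables above:

        SELECT MIN(i.id)             AS itemid,
               i.name                AS itemname,
               GROUP_CONCAT(s.id)    AS saleids,
               GROUP_CONCAT(s.date)  AS saledates
        FROM Items i
        JOIN Sale s ON s.id = i.saleid
        GROUP BY i.name
        ORDER BY itemid;

    On SQL Server the same effect is usually achieved with STUFF(... FOR XML PATH('')), as in the SHOWuser procedure shown earlier on this page.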

  • PHP | SQL syntax error when inserting array

    - by Philip
    Hi guys, I am having some trouble inserting an array into the SQL database. My error is as follows:

        Unable to add : You have an error in your SQL syntax; check the manual that corresponds to your
        MySQL server version for the right syntax to use near '06:45:23,i want to leave a comment)' at line 1

    My query var_dump is:

        string(136) "INSERT INTO news_comments (news_id,comment_by,comment_date,comment)
        VALUES (17263,Philip,2010-05-11 06:45:23,i want to leave a comment)"

    My question is how I can add an empty value for id, as it is the primary key, and not news_id. My insert function looks like this:

        function insertQuery($tbl, &$data)
        {
            global $mysqli;
            $_SESSION['errors'] = array();
            require_once '../config/mysqli.php';

            $query = "INSERT INTO $tbl (" . implode(',', array_keys($data)) . ") " .
                     "VALUES (" . implode(',', array_values($data)) . ")";
            var_dump($query);

            if ($result = mysqli_query($mysqli, $query)) {
                //$id = mysqli_insert_id($mysqli);
                print 'Very well done sir!';
            } else {
                array_push($_SESSION['errors'], 'Unable to add : ' . mysqli_error($mysqli));
            }
        }

    Note: arrays are not my strong point, so I may be using them incorrectly!

  • saveAll not saving associated data

    - by junior29101
    I'm having a problem trying to save (update) some associated data. I've read about a million Google results, but nothing seems to be the solution. I'm at my wit's end and hope some kind soul here can help. I'm using 1.3.0-RC4, and my database is InnoDB. The associations: Course has many course_tees; CourseTee belongs to Course. My controller function is pretty simple (I've made it as simple as possible):

        if (!empty($this->data)) {
            $this->Course->saveAll($this->data);
        }

    I've tried a lot of different variations of that - $this->data['Course'], save($this->data), etc. - without luck. It saves the Course info, but not the CourseTee stuff, and I don't get an error message. Since I don't know how many tees any given course will have, I generate the form inputs dynamically in a loop:

        $form->input('CourseTee.' . $i . '.teeName', array(
            'error' => false, 'label' => false,
            'value' => $data['course_tees'][$i]['teeName']
        ));

    The course inputs are simpler:

        $form->input('Course.hcp' . $j, array(
            'error' => false, 'label' => false, 'class' => 'form_small_w',
            'value' => $data['Course']['hcp' . $j]
        ));

    And this is how my data is formatted:

        Array
        (
            [Course] => Array
            (
                [id] => 1028476
                ...
            )
            [CourseTee] => Array
            (
                [0] => Array
                (
                    [key] => 636
                    [courseid] => 1028476
                    ...
                )
                [1] => Array
                (
                    [key] => 637
                    [courseid] => 1028476
                    ...
                )
                ...
            )
        )
