Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

  • Json HTTP Module stream issue

    - by Justin
    Hey, I have an HTTP Module that I use to clean up the JSON returned by my web service (see http://www.codeproject.com/KB/webservices/ASPNET_JSONP.aspx?msg=3400287#xx3400287xx for an example of this). Basically it relates to calling cross-domain JSON web services from JavaScript. There is a JsonHttpModule which uses a JsonResponseFilter Stream class to write out the JSON, and the overridden Write method is supposed to wrap the name of the callback function around the JSON; otherwise the JSON errors out as needing a label. However, if the JSON is really long, the Write method in the Stream class is called multiple times, causing the callback function to be incorrectly inserted midway through the JSON. Is there a way in the Stream class to wrap the callback function around the stream at the end, or to specify that it write all of the JSON in one Write call instead of in chunks?

    Here's where the JsonHttpModule attaches the JsonResponseFilter:

        public void OnReleaseRequestState(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            if (!_Apply(app.Context.Request)) return;

            // apply response filter to conform to JSONP
            app.Context.Response.Filter = new JsonResponseFilter(app.Context.Response.Filter, app.Context);
        }

    Here's the Write method in the JsonResponseFilter Stream class that gets called multiple times:

        public override void Write(byte[] buffer, int offset, int count)
        {
            var b1 = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "(");
            _responseStream.Write(b1, 0, b1.Length);
            _responseStream.Write(buffer, offset, count);
            var b2 = Encoding.UTF8.GetBytes(");");
            _responseStream.Write(b2, 0, b2.Length);
        }

    Thanks for any help! Justin
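
    One way to fix this (a minimal sketch, not from the original post) is to track state instead of wrapping each chunk: emit the callback prefix only on the first call to Write, and defer the closing ");" until the filter is closed, so chunked writes pass through untouched:

        // Sketch of a JSONP filter that wraps the callback exactly once.
        // Assumes the same _context/_responseStream fields as above; untested.
        private bool _wrotePrefix;

        public override void Write(byte[] buffer, int offset, int count)
        {
            if (!_wrotePrefix)
            {
                var prefix = Encoding.UTF8.GetBytes(_context.Request.Params["callback"] + "(");
                _responseStream.Write(prefix, 0, prefix.Length);
                _wrotePrefix = true;
            }
            // Pass each chunk through unmodified; the suffix is written on Close.
            _responseStream.Write(buffer, offset, count);
        }

        public override void Close()
        {
            var suffix = Encoding.UTF8.GetBytes(");");
            _responseStream.Write(suffix, 0, suffix.Length);
            base.Close();
        }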

  • Is there a simple way to backup and restore all Microsoft SQL Server database objects related to a particular application?

    - by Nathan Hartley
    I would like to back up not only the databases that belong to a particular application living on a shared server, but also the things that get stored outside of the database: the server accounts, jobs, maintenance plans, and whatever else I can't think of at the moment. This backup should be complete enough that its corresponding restore will recreate the entire application on a different SQL Server. This seems like a problem others must have dealt with in the past. So before I embark on creating custom PowerShell scripts for each application, I have come to ask you... Can you help?
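
    One possible starting point (a hedged C# sketch using SQL Server Management Objects; the server name is a placeholder and the snippet is untested) is to script out server-level objects such as logins and Agent jobs alongside the regular database backups:

        using System;
        using Microsoft.SqlServer.Management.Smo;        // SMO assemblies must be referenced
        using Microsoft.SqlServer.Management.Smo.Agent;

        class ScriptServerObjects
        {
            static void Main()
            {
                Server server = new Server("SHAREDSERVER");  // hypothetical server name

                // Script logins and SQL Agent jobs to the console; in practice
                // these lines would be written to files kept next to the .bak files.
                foreach (Login login in server.Logins)
                    foreach (string line in login.Script())
                        Console.WriteLine(line);

                foreach (Job job in server.JobServer.Jobs)
                    foreach (string line in job.Script())
                        Console.WriteLine(line);
            }
        }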

  • How to config mysql-server for heavy load

    - by Rasmus
    I'm in the process of setting up a new database server. I have run a few MySQL database servers before and it has been working okay, but I would like to hear the recommended setup for my server. For example, what should I set max_connections, query_cache_size, table_cache and so on to? I have around 400-600 queries per second; currently: Open tables: 112, Queries per second avg: 430.386. The server I am setting it up on has the following configuration:

        Linux version 2.6.32-5-amd64 (Debian 2.6.32-41squeeze2)
        2x Intel Xeon X3440 @ 2.53GHz
        4GB RAM
        /, /boot, /tmp etc. on software RAID1, 2x 7200RPM SATA
        Data location on software RAID0, 2x 7200RPM SATA

    I am going to place the MySQL databases on the RAID0. Am I missing anything? Let me know! Thanks in advance, I'm looking forward to hearing from you :-) /Rasmus

  • How can I do an "insert" statement from MS SQL -> MySQL using linked servers

    - by bvandrunen
    Is it possible to do an "insert" statement through linked servers? I know it is possible by using MSDTC... but does this work between MS SQL and MySQL? Any help would be greatly appreciated. As of right now I can update and select between the 2 databases, but when I try to run an insert statement it gives me this error:

        OLE DB provider "MSDASQL" for linked server "******" returned message
        "Query-based insertion or updating of BLOB values is not supported.".
        Msg 7343, Level 16, State 2, Line 1
        The OLE DB provider "MSDASQL" for linked server "******" could not
        INSERT INTO table "[*********]...[******_options]".
        Location: memilb.cpp:1493
        Expression: (*ppilb)->m_cRef == 0
        SPID: 76
        Process ID: 1644

  • Database modularity with EBS volumes

    - by Eclyps19
    I would like to add modularity to my websites on EC2 instances by encapsulating the site files and the MySQL files in their own EBS volumes. The end result that I'm going for is the ability to quickly mount a volume or two to different servers running the same AMI (for testing/development/emergency maintenance, etc.), as well as maintain separate snapshots of each. I'm able to do this fairly easily with a single database by symlinking my mounted database EBS volume to the appropriate places (/var/lib/mysql, /etc/my.cnf, /var/log/mysqld.log), but I'm not sure if it would even be possible to have multiple databases on different EBS volumes running concurrently. Example:

        /website1/www.website.com
        /database1/
        /website2/www.otherwebsite.com
        /database2/

    Could anybody shed some light on this for me? Is it possible? Is it a bad idea? Thanks.

  • Which is better Java programming practice for looping up to an int value: a converted for-each loop or a traditional for loop?

    - by Arvanem
    Hi folks, given the need to loop up to an arbitrary int value, is it better programming practice to convert the value into an array and for-each the array, or just use a traditional for loop? FYI, I am calculating the number of 5 and 6 results ("hits") in multiple throws of 6-sided dice. My arbitrary int value is the dicePool, which represents the number of throws. As I understand it, there are two options:

    1. Convert the dicePool into an array and for-each the array:

        public int calcHits(int dicePool) {
            int[] dp = new int[dicePool];
            for (int a : dp) {
                // call throwDice method
            }
        }

    2. Use a traditional for loop:

        public int calcHits(int dicePool) {
            for (int i = 0; i < dicePool; i++) {
                // call throwDice method
            }
        }

    My view is that option 1 is clumsy code and involves unnecessary creation of an array, even though the for-each loop is more efficient than the traditional for loop in option 2. Thanks in advance for any suggestions you might have.

  • Javascript : Submitting a form outside the actual form doesn't work

    - by Ben Fransen
    Hello all, I'm trying to achieve a fairly simple triggering mechanism for deleting multiple items from a table grid. If a user has enough access he/she is able to delete multiple users from a table. In the table I have set up checkboxes, one per row/user. The name of the checkboxes is UsersToDeletep[], and the value per row is the unique UserID. When a user clicks the button 'Delete selected users' a simple validation takes place to make sure at least one checkbox is selected. After that I call my simple function Submit(form). The function works perfectly when called within the form tags, where I also use it to delete a single user. The function:

        function Submit(form) {
            document.forms[form].submit();
        }

    I've also alerted document.forms[form]. The result is, as expected, [object HTMLFormElement]. But for some reason the form just won't submit and a page reload takes place. I'm a bit confused and can't seem to figure out what I'm doing wrong. Can anyone point me in the right direction? Thanks in advance! Ben

  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of input that you usually get from your coworkers, and because of that you tend to make obvious mistakes. After going down that road for some time I need some help from the community. I started a little home-brew project that should turn into a portal of some sort, and the main thing bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper is also in there somewhere. This is because I have multiple data stores that have to be used. So the base idea was that the individual "repositories" each operate on their individual database, and that the business layer then aggregates the business objects, which are then transformed in the presentation layer into view objects. The main problem I face is the following: multiple classes for the same concept - there is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
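
    For what it's worth, the three representations the post describes usually end up looking something like this (a minimal C# sketch; all type and property names are illustrative, not taken from the project):

        // Persistence layer: mirrors the table in one particular data store.
        public class UserRecord
        {
            public int Id;
            public string Name;
        }

        // Business layer: aggregated from one or more records.
        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }

            public static User FromRecord(UserRecord r)
            {
                return new User { Id = r.Id, Name = r.Name };
            }
        }

        // Presentation layer: shaped for a specific view.
        public class UserViewModel
        {
            public string DisplayName { get; set; }

            public static UserViewModel FromUser(User u)
            {
                return new UserViewModel { DisplayName = u.Name };
            }
        }

    The mapping methods are the "overhead" in question; whether they pay off usually depends on how often the layers need to change independently.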

  • How to terminate all [grand]child processes using C# on WXP (and newer MSWindows)

    - by NVRAM
    Question: how can I determine all processes in the child's process tree in order to kill them? I have an application, written in C#, that will:

    1. Get a set of data from the server,
    2. Spawn a 3rd-party utility to process the data, then
    3. Return the results to the server.

    This is working fine. But since a run consumes a lot of CPU and may take as long as an hour, I want to add the ability to have my app terminate its child processes. Some issues that rule out the simple solutions I've found elsewhere:

    - My app's child process "A" (an InstallAnywhere EXE, I think) spawns the real processing app "B" (a java.exe), which in turn spawns more children "C1".."Cn" (most of which are also written in Java).
    - There will likely be multiple copies of my application (and hence, multiple sets of its children) running on the same machine.
    - The child process is not in my control, so there might be some "D" processes in the future.
    - My application must run on 32-bit and 64-bit versions of MS Windows.

    On the plus side there is no issue of data loss; a "clean" shutdown doesn't matter as long as the processes end fairly quickly.
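
    One commonly suggested approach (a sketch, not from the post; it ignores the race where a PID is reused between the query and the kill) is to walk Win32_Process by ParentProcessId via WMI and kill depth-first:

        using System;
        using System.Diagnostics;
        using System.Management;   // add a reference to System.Management.dll

        static class ProcessTree
        {
            // Recursively kill a process and everything it spawned.
            public static void KillTree(int pid)
            {
                using (var searcher = new ManagementObjectSearcher(
                    "SELECT ProcessId FROM Win32_Process WHERE ParentProcessId=" + pid))
                {
                    foreach (ManagementObject child in searcher.Get())
                        KillTree(Convert.ToInt32(child["ProcessId"]));
                }

                try { Process.GetProcessById(pid).Kill(); }
                catch (ArgumentException) { /* process already exited */ }
            }
        }

    Shelling out to "taskkill /PID <pid> /T /F" achieves much the same on XP and later.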

  • Drupal: Template Files, Modules and Content Types for Advanced Theme

    - by theandym
    Intro: I am in the process of trying to convert my first HTML/CSS design into a theme for Drupal. I have used MODx for quite a few designs and appreciate the ability to create different page templates and custom variables to be assigned to those templates; however, I seem to be having some issues making the transition. The site I am theming in Drupal is for a real estate agent. Each page/section will have a different set of content associated with it and will need to display only that content. For example, there will be a page for current listings, each of which will be formatted by a custom content type. However, when I call the content on the home page (or on other pages) I do not want to see this listing data.

    Layout: the layout of the site and the regions associated with each page/section is as follows (each page/section will use the same header and footer):

        Home:      Spotlight, Featured 1, Featured 2
        About:     Spotlight, Bios (profiles of each agent - each a node with name, contact info, pic, etc.; multiple nodes listed), Sidebar
        Listings:  Spotlight, Listings (profiles of properties - each a node with locations, basic info, pic, etc.; multiple nodes listed), Sidebar
        Services:  Spotlight, Content (general paragraph text area), Sidebar
        News/Blog: News/Blog items (list of stories with summaries and links to full articles), Sidebar

    Issue: I have done some reading on Drupal, custom content types (and CCK), Views, and Pathauto, but I have not been able to get a clear picture of how to put it all together to accomplish what I am attempting. What I really would like to know is which modules to use, how best to use them, which elements I need to use where, and what template files I should be using to theme the elements I need to use. Any help or reference to useful resources would be much appreciated.

  • How to move list items from two lists (from different areas) to one specific list by clicking the link inside the item

    - by pschorr
    I've been having some issues setting up a list of items and moving them between each other. Is it possible to have multiple lists, with a link inside each item to move it to a specified list? I.e. a list of actions with a plus next to each action; click the plus and the action moves to a specified box? Furthermore, would it be possible to then load a lightbox (the code would have to be inline, I'd guess) from that page and move those names to the specific list as well? Thanks much!

    A broader view of my efforts so far: the initial issue was that I could not use listboxes, due to their being rendered natively inside each individual browser. Using listboxes I was able to set this up, but with a trigger via the code below (found on Stack Overflow). While it gave me partial functionality, it did not hit the target I was looking for:

        document.getElementById('moveTrigger').onclick = function() {
            var listTwo = document.getElementById('secondList');
            var options = document.getElementById('firstList').getElementsByTagName('option');
            while (options.length != 0) {
                listTwo.appendChild(options[0]);
            }
        }

    I then moved on to jQuery UI's sortable and its ability to connect multiple, and most importantly, styleable lists that can be dragged between each other. This works for the side-by-side tables, but it does not offer the usability I was looking for overall. So I'm unsure how to move forward. While I can get around in PHP, I wouldn't know where to start with this one personally. I'm open to any and all options! Thanks much!

  • Factory Girl: Automatically assigning parent objects

    - by Ben Scheirman
    I'm just getting into Factory Girl and I am running into a difficulty that I'm sure should be much easier. I just couldn't twist the documentation into a working example. Assume I have the following models:

        class League < ActiveRecord::Base
          has_many :teams
        end

        class Team < ActiveRecord::Base
          belongs_to :league
          has_many :players
        end

        class Player < ActiveRecord::Base
          belongs_to :team
        end

    What I want to do is this:

        team = Factory.build(:team_with_players)

    and have it build up a bunch of players for me. I tried this:

        Factory.define :team_with_players, :class => :team do |t|
          t.sequence {|n| "team-#{n}" }
          t.players {|p| 25.times {Factory.build(:player, :team => t)} }
        end

    But this fails on the :team => t section, because t isn't really a Team; it's a Factory::Proxy::Builder. I have to have a team assigned to a player. In some cases I want to build up a League and have it do a similar thing, creating multiple teams with multiple players. What am I missing?

  • remove specific values from multi value dictionary

    - by Anthony
    I've seen posts here on how to make a dictionary that has multiple values per key, like one of the solutions presented in this link: Multi Value Dictionary. It seems that I have to use a List<> as the value for the keys, so that a key can store multiple values. The solution in the link is fine if you want to add values, but my problem now is how to remove specific values from a single key. I have this code for adding values to a dictionary:

        private Dictionary<eVtEvtId, List<VtEvtDelegate>> mEventDict;

        // This adds a value to the dictionary. If the key (inEvent) is not
        // yet present, the key is added first, together with the value.
        public void Subscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            if (mEventDict.ContainsKey(inEvent))
            {
                mEventDict[inEvent].Add(inCallbackMethod);
            }
            else
            {
                mEventDict.Add(inEvent, new List<VtEvtDelegate>() { inCallbackMethod });
            }
        }

    My problem now is removing a specific value from a key. I have this code:

        public void Unsubscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            try
            {
                mEventDict[inEvent].Remove(inCallbackMethod);
            }
            catch (ArgumentNullException)
            {
                MessageBox.Show("The event is not yet present in the dictionary");
            }
        }

    Basically, what I did is just replace the Add() with Remove(). Will this work? Also, if you have any problems or questions with the code (initialization, etc.), feel free to ask. Thanks for the advice.
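
    For what it's worth (an observation plus a sketch reusing the post's type names): a missing key in a generic Dictionary throws KeyNotFoundException, not ArgumentNullException, and List<T>.Remove simply returns false when the value isn't found, so the catch block above would never fire. An exception-free version might look like:

        public void Unsubscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            List<VtEvtDelegate> callbacks;
            if (!mEventDict.TryGetValue(inEvent, out callbacks))
            {
                MessageBox.Show("The event is not yet present in the dictionary");
                return;
            }

            // Remove() returns false, rather than throwing, if the delegate
            // was never subscribed; drop the key once its list is empty.
            callbacks.Remove(inCallbackMethod);
            if (callbacks.Count == 0)
                mEventDict.Remove(inEvent);
        }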

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new

        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform

          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all URLs into a file and do the following in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:

    - Curl::Multi - it is somehow faster, but misses 50-90% of files (does not download them and gives no reason/code)
    - multiple threads with Curl::Easy - around the same speed as single-threaded

    Why is reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than do the downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was not possible to specify a separate output file for each URL when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

  • Direct DB to Web Server connection

    - by Joel Coel
    I have a database server sitting right underneath a virtual machine host server in the rack, and this VM host is primarily responsible for servers hosting a couple of different web sites and app servers that all talk to databases on the other server. Right now both servers are connected to the same switch, and I'm pretty happy with the pathing. However, both servers also have an unused network port, so I'm wondering about the potential benefits of using a short crossover (or normal + auto-MDIX) network cable to connect these two servers together directly. Is this a good idea, or would I be doing something that won't show much benefit and is just likely to trip up a future admin who's not looking for it? The biggest weakness I can see right now is that this would likely require a code change for each VM app to point to the new IP of the database server on this private little network, and if I have a problem with the virtual machine host and have to spin up its guests elsewhere while I fix it, I'll have to change this back before things will work.

  • C# Execute Method (with Parameters) with ThreadPool

    - by washtik
    We have the following piece of code (the idea for this code was found on this website) which will spawn new threads for the method Do_SomeWork(). This enables us to run the method multiple times asynchronously. The code is:

        int numThreads = 20;
        int toProcess = numThreads;
        ManualResetEvent resetEvent = new ManualResetEvent(false);

        for (int i = 0; i < numThreads; i++)
        {
            new Thread(delegate()
            {
                Do_SomeWork(Parameter1, Parameter2, Parameter3);
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            }).Start();
        }

        resetEvent.WaitOne();

    However, we would like to make use of the ThreadPool rather than create our own new threads, which can be detrimental to performance. The question is how we can modify the above code to make use of the ThreadPool, keeping in mind that the method Do_SomeWork takes multiple parameters and also has a return type (i.e. the method is not void). Also, this is C# 2.0.
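
    A sketch of the same pattern on the ThreadPool (kept C# 2.0 compatible; Do_SomeWork, its parameters, and the SomeResult type are the post's hypothetical names, not a real API):

        int numThreads = 20;
        int toProcess = numThreads;
        ManualResetEvent resetEvent = new ManualResetEvent(false);
        SomeResult[] results = new SomeResult[numThreads];  // one slot per work item captures the return value

        for (int i = 0; i < numThreads; i++)
        {
            int slot = i;  // copy the loop variable so each closure gets its own index
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                results[slot] = Do_SomeWork(Parameter1, Parameter2, Parameter3);
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            });
        }

        resetEvent.WaitOne();
        // results[] now holds each call's return value.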

  • Repartition Ubuntu by command line?

    - by DisgruntledGoat
    On my server the filesystem includes these partitions (output from df -ah):

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda6   4.6G  929M  3.5G   21%   /
        /dev/sda5    76M   20M   53M   27%   /boot
        /dev/sda8   449G  199M  426G    1%   /home
        /dev/sda7   4.6G  4.4G     0  100%   /var

    I'm storing the web sites and databases under /var, and as you can see it's full. The /home folder just has basic user directories and nothing else, so I'd like to repartition the server so that /dev/sda8 is about 5GB, with the rest going to /dev/sda7. What's the easiest way to do this via the command line (i.e. SSH)?

  • SQL Server 2008 Replication Promotion

    - by Stefan Mai
    I have a 4-node cluster, 1 subscriber and 3 publishers, all running SQL Server 2008 R2 Enterprise. The intention is that if the subscriber goes down, we can use one of the publishers to quickly build up its replacement. Our testing reveals a problem, though: the subscriber databases all have Not For Replication set to Yes on the identity columns so that they can maintain the identity values set in the subscriber. This causes a problem when they become subscribers, because now we don't have identity insert functionality: we get a primary key error. Is there any way to "promote" a subscriber to publisher?

  • SQL Server Compact timed out waiting for a lock

    - by jankhana
    Hi all, I have an application in which I use SQL Compact 3.5 with VS2008. I'm running multiple threads in my application which contact the Compact database and access its rows. Each thread selects and deletes rows, i.e. selecting 5 rows, handing them to the application, and deleting those rows from the table. It works great with a single thread, but if 3 or more threads are running I very often get the TimeOut error. I have increased the timeout property in the connection string, but it didn't give me the expected result. The error log is as follows:

        SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms
        for devices and 5000ms for desktops. The default lock timeout can be increased in
        the connection string using the ssce: default lock timeout property. [ Session id = 5,
        Thread id = 4204, Process id = 4808, Table name = XXX, Conflict type = x lock
        (s blocks), Resource = TAB ]

    The query that I use to retrieve is as follows:

        select Top(5) * from TableName order by id;
        delete from TableName where id in (select top(5) id from TableName order by id);

    Is there any way we can avoid this TimeOut exception? I run the above query as a transaction in VS2008, one statement using SqlCeCommand and the other using SqlCeDataAdapter. Any ideas?
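
    One thing worth trying (a sketch, not from the post; the table and column names are the post's placeholders) is to run the select and the delete on one connection inside a single explicit transaction, so competing threads cannot interleave between the two statements:

        using (SqlCeConnection conn = new SqlCeConnection(connStr))  // connStr as configured in the app
        {
            conn.Open();
            using (SqlCeTransaction tx = conn.BeginTransaction())
            {
                SqlCeCommand select = conn.CreateCommand();
                select.Transaction = tx;
                select.CommandText = "select top(5) * from TableName order by id";
                // ... read the five rows with select.ExecuteReader() ...

                SqlCeCommand delete = conn.CreateCommand();
                delete.Transaction = tx;
                delete.CommandText =
                    "delete from TableName where id in (select top(5) id from TableName order by id)";
                delete.ExecuteNonQuery();

                tx.Commit();
            }
        }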

  • MySql Query lag time / deadlock?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId FROM settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?

  • ado.net Concurrency violation

    - by Bicubic
    This is my first time using ADO.NET. I am trying to make a database of users. First I populate my DataSet:

        adapter.AcceptChangesDuringFill = true;
        adapter.AcceptChangesDuringUpdate = true;
        adapter.Fill(dataset);

    To create a user:

        User user = new User();
        user.datarow = dataset.Users.NewUsersRow();
        user.Name = username;
        user.PasswordHash = GetHash(password);
        user.Rights = UserRights.None;
        users.Add(user);
        dataset.Users.AddUsersRow(user.datarow);
        adapter.Update(dataset);

    When a user property is modified:

        adapter.Update(dataset);

    Creation by itself is fine. If I take an existing user and make multiple changes, fine. Multiple creations in a row, fine. But for a creation followed by a property change, I get this:

        "Concurrency violation: the UpdateCommand affected 0 of the expected 1 records."

    Any ideas?
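
    A common cause of this pattern (an assumption here, not confirmed by the post) is an identity primary key: the database assigns the real key during the INSERT while the DataRow still holds the locally generated placeholder, so the next UPDATE's WHERE clause matches 0 rows. One hedged fix is to have the InsertCommand round-trip the server-assigned row back into the DataSet (this sketch assumes SQL Server and a SqlDataAdapter; the table and column names are illustrative):

        // Re-select the inserted row so the DataRow picks up the real identity value.
        SqlCommand insert = new SqlCommand(
            "INSERT INTO Users (Name, PasswordHash, Rights) " +
            "VALUES (@Name, @PasswordHash, @Rights); " +
            "SELECT Id, Name, PasswordHash, Rights FROM Users WHERE Id = SCOPE_IDENTITY();",
            connection);

        insert.Parameters.Add("@Name", SqlDbType.NVarChar, 64, "Name");
        insert.Parameters.Add("@PasswordHash", SqlDbType.NVarChar, 128, "PasswordHash");
        insert.Parameters.Add("@Rights", SqlDbType.Int, 4, "Rights");

        // Overwrite the DataRow with the first record the command returns.
        insert.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
        adapter.InsertCommand = insert;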

  • Version control and branching when using Oracle

    - by Ed Woodcock
    Hi folks: at work we're using Oracle and C#/ASP.NET to handle a customer's website. This site is very large-scale, so the database is very large. We use Perforce for our version control and attach create-or-replace scripts to FogBugz cases whenever the database changes, which has been fine until now, as we are at a point where five developers are working on five expansions of the system, each on a separate Perforce branch. Unfortunately, we cannot get duplicate databases due to the database size, so everyone is still working from the same one. This is obviously a cause of problems: only ten minutes ago we had a bit of an issue where a stored procedure change for a branch propagated over to the pre-production server and caused a large number of crashes for the testers. Ideally, we would like a way to track these changes without having to manually keep track of them through FogBugz. My question is: how do you all handle this situation? I'm sure there must be a good way by now to handle versioning, or at least tracking changes, in an Oracle database.

  • Migrate reports from MS Access to OOo Base

    - by John Gardeniers
    I'm currently looking at upgrading our office machines from Office XP to Office 2010. For most users the Standard edition is fine, but just a few of us use Access. There are only a couple of standalone Access databases, but the program is used fairly extensively (mostly by myself) as a front end to MySQL. As the cost difference between the Standard and Pro versions of Office 2010 is about $170 (AUD), I'm looking at possible alternatives to Access. I'm no huge fan of OpenOffice but could be convinced to use it if I can find a way to migrate the many reports we currently have in Access; the data is not a problem. So far I've found nothing to suggest this is even possible/practical, but perhaps someone here knows otherwise. I'm also open to suggestions for other alternatives to Access, but it must be able to produce flexible reports easily - that is the one real strength of Access, in my view. Because of its subjective nature, I'm making this community wiki.

  • Windows 2003, MySQL 5.0.24, Windows Updates causes problems with MySQL?

    - by Alessandro
    Hi, our Windows 2003 web server has 2 GB of RAM and MySQL v5.0.24-community-nt. Possibly due to Windows Updates (is that possible?), I'm having problems with the MySQL databases, and I have to restart the IIS services every day. Events from the log:

        Changed limits: max_open_files: 2048 max_connections: 800 table_cache: 619
        Do you already have another mysqld server running on port: 3306 ?
        Can't connect to MySQL server on 'localhost' (10055)

    Should I increase innodb_buffer_pool_size from 250M to 500M? Or/and something else? Thanks

  • Sql Server differential backup : Simple vs Full recovery model

    - by MaxiWheat
    I need to better understand the backup process under SQL Server 2008. Since drive space matters to us and we want a better disaster recovery solution, I decided that we will take differential backups throughout the day (every hour). Am I right to think that if I keep the recovery model of my databases set to Simple, the differential backups will be almost the same size as a full backup (too big to make one every hour)? I already tried switching to Full recovery and it seemed to fix the issue (the differential backups were much smaller). I have heard that the recovery model must be set to Full to use log backups (for to-the-minute recovery, etc., but we don't need that), but never in connection with differential backups. So, does the recovery model really have an impact on differential backups, or am I missing something? Thank you
