Search Results

Search found 21336 results on 854 pages for 'db api'.

  • Is it possible to use the second part of this code for repository patterns and generics?

    - by newToCSharp
    Are there any issues with using version 2 to get the same results as version 1, or is this just bad coding? Any ideas?

        public class Customer
        {
            public int CustomerID { get; set; }
            public string EmailAddress { get; set; }
            int Age { get; set; }
        }

        public interface ICustomer
        {
            void AddNewCustomer(Customer Customer);
            void AddNewCustomer(string EmailAddress, int Age);
            void RemoveCustomer(Customer Customer);
        }

        public class BALCustomer
        {
            private readonly ICustomer dalCustomer;

            public BALCustomer(ICustomer dalCustomer) { this.dalCustomer = dalCustomer; }

            public void Add_A_New_Customer(Customer Customer) { dalCustomer.AddNewCustomer(Customer); }
            public void Remove_A_Existing_Customer(Customer Customer) { dalCustomer.RemoveCustomer(Customer); }
        }

        public class CustomerDataAccess : ICustomer
        {
            public void AddNewCustomer(Customer Customer)
            {
                // MAKE DB CONNECTION AND EXECUTE
                throw new NotImplementedException();
            }

            public void AddNewCustomer(string EmailAddress, int Age)
            {
                // MAKE DB CONNECTION AND EXECUTE
                throw new NotImplementedException();
            }

            public void RemoveCustomer(Customer Customer)
            {
                // MAKE DB CONNECTION AND EXECUTE
                throw new NotImplementedException();
            }
        }

        // VERSION 2
        public class Customer_New : DataRespository<CustomerDataAccess>
        {
            public int CustomerID { get; set; }
            public string EmailAddress { get; set; }
            public int Age { get; set; }
        }

        public class DataRespository<T> where T : class, new()
        {
            private T item = new T();

            public T Execute
            {
                get { return item; }
                set { item = value; }
            }

            public void Update() { /* TO BE CODED */ }
            public void Save() { /* TO BE CODED */ }
            public void Remove() { /* TO BE CODED */ }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Customer_New cus = new Customer_New() { Age = 10, EmailAddress = "[email protected]" };
                cus.Save();
                cus.Execute.RemoveCustomer(new Customer());

                // Repository Version
                Customer customer = new Customer() { EmailAddress = "[email protected]", CustomerID = 10 };
                BALCustomer bal = new BALCustomer(new CustomerDataAccess());
                bal.Add_A_New_Customer(customer);
            }
        }
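
    A more conventional shape for version 2 keeps data-access details out of the entity entirely and puts the generic behaviour behind a repository interface rather than a base class. A minimal sketch under that assumption - the IRepository<T> and CustomerRepository names below are illustrative, not taken from the question:

        public interface IRepository<T> where T : class
        {
            void Add(T entity);
            void Remove(T entity);
        }

        public class CustomerRepository : IRepository<Customer>
        {
            public void Add(Customer entity) { /* open DB connection and insert */ }
            public void Remove(Customer entity) { /* open DB connection and delete */ }
        }

    With this shape, Customer stays a plain data object and calling code depends only on IRepository<Customer>, which is exactly what version 2 gives up by making the entity inherit its own data-access type.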

  • Sortable with scriptaculous problems

    - by user195257
    Hello, I'm following a few tutorials to sort a list, but I can't get the DB to update. The drag-and-drop side of things is working; I also javascript alert() the serialized list onUpdate, and the order is printed out as follows: images_list[]=20&images_list[]=19 etc. So the sorting and dragging are working fine, I just can't get the database to update. This is my code:

        <script type="text/javascript">
        Sortable.create("images_list", {
            onUpdate: function() {
                new Ajax.Request("processor.php", {
                    method: "post",
                    parameters: { data: Sortable.serialize("images_list") }
                });
            }
        });
        </script>

    processor.php code:

        //Connect to DB
        require_once('connect.php');

        parse_str($_POST['data']);
        for ($i = 0; $i < count($images_list); $i++) {
            $id = $images_list[$i];
            mysql_query("UPDATE `images` SET `ranking` = '$i' WHERE `id` = '$id'");
        }

    Any ideas would be great, thank you!

  • How should I store dynamically changing data in the server cache?

    - by Scott
    Hey all,

    EDIT - purpose of this website: It's called Utopiapimp.com, a third-party utility for a game called utopia-game.com. The site currently has over 12k users and I run it. The game is fully text-based and will always remain so. Users copy full pages of text from the game and paste the copied information into my site. I run a series of regular expressions against the pasted data to break it down, then insert anywhere from 5 to over 30 values into the DB based on that one paste. I then take those values and run queries against them to display the information back in a very simple, easy-to-understand way. The game is team-based and each team has 25 users, so each team is a group and each row is one user's information. Users can update all 25 rows or just one row at a time. I need to cache things because the site is very slow doing over 1,000 queries almost every minute.

    So here is the deal. Imagine I have an Excel spreadsheet with 100 columns and 5,000 rows. Each row has two unique identifiers: one for the row itself and one that groups 25 rows together. About 10 columns in the row will almost never change and the other 90 columns will always be changing - some can even change in a matter of seconds, depending on how fast the row is updated. Rows can also be added to and deleted from the group, but not from the database. The rows are produced by about 4 queries against the database, so they show the most recently updated data. Every time something in the database is updated, I would also like the cached row to be updated. If a row or a group has not been updated in 12 or so hours, it should be dropped from the cache; once the user requests that group again via the DB queries, it should be placed back into the cache.

    The above is what I would like - that is the wish. In reality, I still have all the rows, but the way I store them in the cache is currently broken. I store each row in a class, and the classes are stored in the server cache in one huge list. When I update, delete, or insert items in the list or rows, most of the time it works, but sometimes it throws errors because the cache has changed underneath me. I want to be able to lock down the cache, more or less the way a database takes a lock on a row. I have DateTime stamps to remove things after 12 hours, but this almost always breaks because other users are updating the same 25 rows in the group, or the cache has simply changed. This is an example of how I add items to the cache; it only pulls the 10 or so columns that very rarely change, and it also removes rows not updated in the last 12 hours:

        DateTime dt = DateTime.UtcNow;
        if (HttpContext.Current.Cache["GetRows"] != null)
        {
            List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"];
            var ch = (from xx in pis
                      where xx.groupID == groupID
                      where xx.rowID == rowID
                      select xx).ToList();
            if (ch.Count() == 0)
            {
                var ck = GetInGroupNotCached(rowID, groupID, dt); // Pulling the group from the DB
                for (int i = 0; i < ck.Count(); i++)
                    pis.Add(ck[i]);
                pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12));
                HttpContext.Current.Cache["GetRows"] = pis;
                return ck;
            }
            else
                return ch;
        }
        else
        {
            var pis = GetInGroupNotCached(rowID, groupID, dt); // Pulling the group from the DB
            HttpContext.Current.Cache["GetRows"] = pis;
            return pis;
        }

    On the last point: I do remove items from the cache, so the cache doesn't actually get huge. To restate the question: what's a better way of doing this? Maybe with locks on the cache? Can I do better than this? I just want it to stop breaking when removing or adding rows.
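
    One common way to stop these intermittent failures is to serialise every read-modify-write of the shared list behind a single lock, since List<T> is not thread-safe and the ASP.NET cache entry can be hit by many requests at once. A minimal sketch that reuses the names from the snippet above (only the static lock object is new):

        private static readonly object CacheLock = new object();

        public List<RowIdentifiers> GetRows(int rowID, int groupID)
        {
            lock (CacheLock) // one reader/writer at a time for this cache entry
            {
                var pis = HttpContext.Current.Cache["GetRows"] as List<RowIdentifiers>;
                if (pis == null)
                {
                    pis = GetInGroupNotCached(rowID, groupID, DateTime.UtcNow);
                    HttpContext.Current.Cache["GetRows"] = pis;
                    return pis;
                }
                // ...same lookup, refresh and 12-hour purge logic as above,
                // now guaranteed to run without interleaving...
                return pis;
            }
        }

    A coarse lock like this trades some throughput for correctness; finer-grained variants (one lock per group, or a ConcurrentDictionary keyed by groupID) follow the same idea.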

  • How to add a jQuery lightbox to content added to the page via AJAX?

    - by laurenmichell
    I am loading a gallery onto a page using the Instagram API. The AJAX looks something like this:

        $.ajax({
            type: 'GET',
            dataType: 'jsonp',
            cache: false,
            url: 'https://api.instagram.com/v1/tags/food/media/recent?client_id=' + instagramCID,
            success: function(data) {
                for (i in data.data) {
                    $('.instagram').append('<div class="instagram-placeholder"><a href="' +
                        data.data[i].images.standard_resolution.url +
                        '" title="Photo via ' + data.data[i].user.username +
                        ' on Instagram" rel="lightbox[gallery]"><img class="instagram-image" src="' +
                        data.data[i].images.thumbnail.url + '"/></a></div>');
                }
            }
        });

    The HTML renders something like this after the AJAX has loaded the content onto the page:

        <a href="http://distilleryimage1.instagram.com/5184cfc4754211e181bd12313817987b_7.jpg" title="Photo via washingtonwoman on Instagram" rel="lightbox[gallery]"><img class="instagram-image" src="http://distilleryimage1.instagram.com/5184cfc4754211e181bd12313817987b_5.jpg"></a>

    I know I need to load the lightbox after the dynamic content is added to the page, but I can't seem to figure out how to do that. All the other advice I've tried from Stack Overflow has created crazy recursion that crashed my browser. I'm using this jQuery lightbox plugin: http://leandrovieira.com/projects/jquery/lightbox/
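
    Since the plugin binds to the anchors that exist when it is called, one approach is simply to call its initialiser again at the end of the success callback, once the new anchors are in the DOM. A sketch, assuming the plugin exposes the lightBox() method shown in its demos:

        success: function(data) {
            for (i in data.data) {
                $('.instagram').append(/* ...same markup as above... */);
            }
            // (Re)bind the plugin now that the anchors exist in the DOM.
            $('.instagram a[rel^="lightbox"]').lightBox();
        }

    Binding only inside .instagram (rather than re-binding every lightbox anchor on the page on each call) is one way to avoid the runaway recursion described above.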

  • How to send a message from one dialog to another?

    - by zim22
    Hi! I was given a task. The first dialog-based application has four buttons (up, down, left, right). The second dialog-based application has two controls (e.g. a text area and a button). When I click the "left" button on the first dialog, the controls on the second dialog must move to the left. Unfortunately I don't know the Win32 API at all. How can I implement this? What kind of Win32 API mechanism should I be using? Thanks.
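
    The usual Win32 mechanism for this is window messages: the sender finds the other application's dialog window and posts it a message, and the receiver handles that message and repositions its controls. A rough sketch, assuming the second dialog's window title is "Second Dialog" and both applications register the same message name:

        #include <windows.h>

        // Sender (first dialog), e.g. in the "left" button handler.
        static const UINT WM_MOVE_CONTROLS = RegisterWindowMessage(TEXT("MyApp.MoveControls"));

        void OnLeftClicked()
        {
            HWND hReceiver = FindWindow(NULL, TEXT("Second Dialog"));
            if (hReceiver != NULL)
                PostMessage(hReceiver, WM_MOVE_CONTROLS, VK_LEFT, 0);
        }

        // Receiver (second dialog): register the same message name, handle it in the
        // dialog procedure, and move the controls with SetWindowPos()/MoveWindow().

    For payloads richer than a direction code, WM_COPYDATA is the conventional way to pass data between processes.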

  • How to get rid of this error in asp.net-mvc?

    - by Pandiya Chendur
    I am using LINQ to SQL as an ORM. I wrote this inner join:

        public IQueryable<Material> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            select new
                            {
                                m.Mat_id,
                                m.Mat_Name,
                                Mt.Name,
                                m.Mat_Type
                            };
            return materials;
        }

    But when I compile this I get an error:

        Cannot implicitly convert type 'System.Linq.IQueryable<AnonymousType#1>' to 'System.Linq.IQueryable<CrMVC.Models.Material>'. An explicit conversion exists (are you missing a cast?)

    Am I missing something? Any suggestions?
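
    The anonymous type created by select new { ... } cannot be returned as IQueryable<Material>; the method has to return either the entity itself or a named type that the query projects into. A sketch of both options - MaterialView is a made-up name, not part of the question's model:

        // Option 1: keep the declared return type and select the entity itself.
        public IQueryable<Material> FindAllMaterials()
        {
            return from m in db.Materials
                   join mt in db.MeasurementTypes on m.MeasurementTypeId equals mt.Id
                   select m;
        }

        // Option 2: project into a named view-model type and change the return type to match.
        public class MaterialView
        {
            public int Mat_id { get; set; }
            public string Mat_Name { get; set; }
            public string MeasurementName { get; set; }
            public string Mat_Type { get; set; }
        }

        public IQueryable<MaterialView> FindAllMaterialViews()
        {
            return from m in db.Materials
                   join mt in db.MeasurementTypes on m.MeasurementTypeId equals mt.Id
                   select new MaterialView
                   {
                       Mat_id = m.Mat_id,
                       Mat_Name = m.Mat_Name,
                       MeasurementName = mt.Name,
                       Mat_Type = m.Mat_Type
                   };
        }

    Option 2 is the usual choice when the caller needs the measurement name alongside the material fields.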

  • Creating a user login session in PHP

    - by user2419393
    I'm stuck on how to create a session for a user who logs in. I've got the part that checks the login information against the database, but I'm stuck on how to take the email address and store it in a session. Here is my PHP code:

        <?php
        include '../View/header.php';
        session_start();
        require('../model/database.php');

        $email = $_POST['username'];
        $password = $_POST['password'];

        $sql = "SELECT emailAddress FROM customers WHERE emailAddress ='$email' AND password = '$password'";
        $result = mysql_query($sql, $db);
        if (!$result) {
            echo "DB Error, could not query the database\n";
            echo 'MySQL Error: ' . mysql_error();
            exit;
        }

        while ($row = mysql_fetch_assoc($result)) {
            echo $row['emailAddress'];
        }
        mysql_free_result($result);
        ?>
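
    Two things are usually needed here: session_start() has to run before anything (including header.php) sends output, and the email address goes into $_SESSION once the credentials match. A minimal sketch along those lines, keeping the mysql_* style only to match the code above:

        <?php
        session_start();                     // must run before any output is sent
        require('../model/database.php');

        // ... same credential check and query as above ...

        if ($row = mysql_fetch_assoc($result)) {
            $_SESSION['email'] = $row['emailAddress'];   // remembered on every later request
            include '../View/header.php';
            echo 'Welcome back, ' . htmlspecialchars($_SESSION['email']);
        } else {
            echo 'Invalid email address or password.';
        }
        ?>

    Later pages then call session_start() and read $_SESSION['email'] to know who is logged in. (Separately, the query should move to parameterised statements and hashed passwords, but that is another topic.)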

  • How can I kill MySQL queries every 60 seconds in Windows?

    - by Ethan Allen
    I want to check my MySQL server every minute and kill queries that have run longer than 150 seconds. The main reason I want to do this is that I don't want queries from certain people to lock up the DB for everyone else. I know this is not the ultimate solution to the problem, but at least it's a fallback in case something goes wrong with a query. I don't have a slave DB (this is just an at-home project). I'd like to schedule a script that does this for me. I'm unfamiliar with Perl and Ruby, and I need it done on my Windows 2008 Server box. I've looked into creating a simple cmd-line script, but that doesn't seem to be possible. I know I can currently do something like this, but I have to do it manually:

        mysqladmin processlist
        mysqladmin kill

    Does anyone have any ideas or examples of how I could do this?
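
    One low-tech option on Windows is a one-line batch file run every minute by Task Scheduler: the first mysql call generates KILL statements from INFORMATION_SCHEMA.PROCESSLIST (available in MySQL 5.1+) and pipes them into a second mysql call that executes them. A sketch - SECRET is a placeholder password, and the 150-second threshold matches the question:

        @echo off
        rem kill_long_queries.bat - schedule to run every 60 seconds
        mysql -u root -pSECRET -N -B -e "SELECT CONCAT('KILL ', ID, ';') FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND = 'Query' AND TIME > 150" | mysql -u root -pSECRET

    The -N and -B switches suppress the column header and use plain batch output, so only the generated KILL statements reach the second client.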

  • MySQL stored procedures using PHP

    - by neo skosana
    I have a stored procedure:

        delimiter //
        create procedure userlogin(in eml varchar(50))
        begin
          select * from users where email = eml;
        end//
        delimiter ;

    And the PHP:

        $db = new mysqli("localhost", "root", "", "houseDB");
        $eml = "[email protected]";
        $sql = $db->query("CALL userlogin('$eml')");
        $result = $sql->fetch_array();

    The error that I get in the browser when I run the PHP script:

        Fatal error: Call to a member function fetch_array() on a non-object...

    I am using phpMyAdmin version 3.2.4 and MySQL client version 5.1.41. Please help. Thank you.
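
    The usual cause of "Call to a member function ... on a non-object" here is that query() returned false instead of a result object, so checking for failure (and printing $db->error) exposes the real problem. A minimal sketch of that pattern, keeping the same connection details:

        $db  = new mysqli('localhost', 'root', '', 'houseDB');
        $eml = $db->real_escape_string('[email protected]');

        $res = $db->query("CALL userlogin('$eml')");
        if ($res === false) {                      // query() returns false on error
            die('CALL failed: ' . $db->error);     // e.g. missing routine or privilege
        }
        $row = $res->fetch_array();

    Note that after a CALL, mysqli leaves an extra result set on the connection; calling $db->next_result() before issuing another query avoids "Commands out of sync" errors.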

  • MVC framework for huge JEE application

    - by chaKa
    Which MVC framework is the best option (performance/ease of development) for a web application that will have over 2 million visits per week? Basically the site is a search engine, but there will also be large amounts of XML parsing and high DB traffic. We are using Java on JBoss 4.2.3.x, with PostgreSQL as the DB and Solr for the searches. We were thinking of coding JSPs with taglibs, plus servlets, but we felt there must be a better alternative we don't know yet, as we are just starting out in the Java web application world. Any opinions and experience you can share will be appreciated! Thanks in advance!

  • Does C++ require a destructor call for each placement new?

    - by Josh Haberman
    I understand that placement new calls are usually matched with explicit calls to the destructor. My question is: if I have no need for a destructor (no code to put there, and no member variables that have destructors), can I safely skip the explicit destructor call?

    Here is my use case: I want to write C++ bindings for a C API. In the C API many objects are accessible only by pointer. Instead of creating a wrapper object that contains a single pointer (which is wasteful and semantically confusing), I want to use placement new to construct an object at the address of the C object. The C++ object will do nothing in its constructor or destructor, and its methods will do nothing but delegate to the C methods. The C++ object will contain no virtual methods.

    There are two parts to this question:
    1. Is there any reason why this idea will not work in practice on any production compiler?
    2. Does it technically violate the C++ language spec?

  • [java] How to parse an XML document?

    - by user32167
    I have an XML document in a variable (not in a file). How can I get at the data stored in it? I don't have any additional file; the XML is 'inside' my source code. When I use

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(XML);

    (XML is my xml variable), I get an error:

        java.io.FileNotFoundException: C:\netbeans\app-s7013\<network ip_addr="10.0.0.0\8" save_ip="true"> File not found.
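
    DocumentBuilder.parse(String) treats its argument as a URI, which is why the XML text ends up in a FileNotFoundException path. Wrapping the string in an InputSource backed by a StringReader parses it in place; a minimal sketch (xml is the String variable holding the document):

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        // parse(String) expects a URI; an InputSource feeds the in-memory text instead.
        Document doc = db.parse(new InputSource(new StringReader(xml)));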

  • 42000 Syntax error in query when executing prepared statement

    - by Griff McGriff
    I have been pulling my hair out trying to switch my current script over to PDO. I have simplified the MySQL query for this example, but the error remains even with this version.

        $sql = 'SELECT * FROM :table WHERE lastUpdate > :appDate';
        try {
            $db = connect();
            $stmt = $db->prepare($sql);
            $stmt->bindParam(':table', $table);
            $stmt->bindParam(':appDate', $appDate);
            foreach ($tablesToCheck as $table) {
                $stmt->execute();
                $resultset[] = $stmt->fetchAll();
            }
        } catch (PDOException $e) {
            print 'Error!: ' . $e->getMessage() . '<br/>';
        } // End try catch

    $stmt->errorInfo() returns:

        (
            [0] => 42000
            [1] => 1064
            [2] => You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''GroupName' WHERE lastUpdate > NULL' at line 1
        )
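
    Placeholders in PDO can only stand in for values, never for identifiers such as table or column names, which is why :table ends up quoted as the string 'GroupName' in the error. The identifier has to go into the SQL text itself, ideally after checking it against a whitelist; a sketch of that shape (the whitelist contents are illustrative):

        $allowedTables = array('GroupName', 'OtherTable');   // known-good table names

        try {
            $db = connect();
            foreach ($tablesToCheck as $table) {
                if (!in_array($table, $allowedTables, true)) {
                    continue;                                 // never interpolate unchecked identifiers
                }
                // Identifier goes into the SQL text; the value stays a bound parameter.
                $stmt = $db->prepare("SELECT * FROM `$table` WHERE lastUpdate > :appDate");
                $stmt->bindValue(':appDate', $appDate);
                $stmt->execute();
                $resultset[] = $stmt->fetchAll();
            }
        } catch (PDOException $e) {
            print 'Error!: ' . $e->getMessage() . '<br/>';
        }

    The NULL shown for :appDate in the error also suggests $appDate was still unset when execute() ran, which is a separate issue from the identifier problem.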

  • Dynamic where clause in LINQ - with column names available at runtime

    - by sandesh247
    Disclaimer: I've solved the problem using Expressions from System.Linq.Expressions, but I'm still looking for a better/easier way. Consider the following situation:

        var query = from c in db.Customers
                    where (c.ContactFirstName.Contains("BlackListed") ||
                           c.ContactLastName.Contains("BlackListed") ||
                           c.Address.Contains("BlackListed"))
                    select c;

    The columns/attributes that need to be checked against the blacklisted term are only available to me at runtime. How do I generate this dynamic where clause? An additional complication is that the Queryable collection (db.Customers above) is typed to a Queryable of the base class of Customer (say Person), and therefore writing c.Address as above is not an option.
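
    For reference, a compact version of the Expression-based approach mentioned in the disclaimer: build one Contains() call per runtime column name, OR them together, and hand the resulting lambda to Where(). The helper name and the string-property assumption are illustrative:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        static class DynamicFilters
        {
            public static Expression<Func<T, bool>> ContainsInAny<T>(string[] propertyNames, string term)
            {
                var p = Expression.Parameter(typeof(T), "c");
                var contains = typeof(string).GetMethod("Contains", new[] { typeof(string) });
                Expression body = null;

                foreach (var name in propertyNames)
                {
                    // c.<name>.Contains(term)
                    var call = Expression.Call(Expression.Property(p, name), contains,
                                               Expression.Constant(term));
                    body = body == null ? (Expression)call : Expression.OrElse(body, call);
                }
                return Expression.Lambda<Func<T, bool>>(body ?? Expression.Constant(false), p);
            }
        }

        // usage, with column names known only at runtime:
        // var query = db.Customers.Where(DynamicFilters.ContainsInAny<Customer>(columns, "BlackListed"));

    For the base-class complication, one option is to narrow the sequence with OfType<Customer>() first so the lambda's parameter type actually has the properties being named.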

  • New replicaset resident memory is larger than the existing sets

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK; I get the same number of documents as the existing two members (the primary and one secondary). However, when I check the memory on MMS, the new resynced member/mongod process shows different memory status values than the other two. For the existing two, db.serverStatus().mem shows the following:

        "mem" : {
            "bits" : 64,
            "resident" : 239,
            "virtual" : 66348,
            "supported" : true,
            "mapped" : 32865,
            "mappedWithJournal" : 65730
        }

    However, the newly resynced member shows:

        "mem" : {
            "bits" : 64,
            "resident" : 1239,
            "virtual" : 52447,
            "supported" : true,
            "mapped" : 25700,
            "mappedWithJournal" : 51400
        }

    The resynced member's resident memory is 6-10 times more than the existing ones', and even the virtual and mapped values are different. I wonder if this is normal because all the data comes in at once during the resync? Can anyone explain? Thanks.

  • Java: XML parser gives null element

    - by Johan
    When I try to parse an XML file, it sometimes gives a null element for the title. I think it has to do with the HTML entity &#039;. How can I solve this problem? I have the following XML:

        <item>
            <title>&#039; Nieuwe DVD &#039;</title>
            <description>tekst, tekst tekst</description>
            <link>dvd.html</link>
            <category>nieuws</category>
            <pubDate>Sat, 1 Jan 2011 9:24:00 +0000</pubDate>
        </item>

    And the following code to parse the XML:

        // DocumentBuilderFactory and DocumentBuilder are used for XML parsing
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();

        // using db (DocumentBuilder), parse the XML data and assign it to an Element
        Document document = db.parse(is);
        Element element = document.getDocumentElement();

        // take the rss nodes into a NodeList
        element.normalize();
        NodeList nodeList = element.getElementsByTagName("item");

        if (nodeList.getLength() > 0) {
            for (int i = 0; i < nodeList.getLength(); i++) {
                // take each entry (corresponds to <item></item> tags in the XML data)
                Element entry = (Element) nodeList.item(i);
                entry.normalize();

                Element _titleE = (Element) entry.getElementsByTagName("title").item(0);
                Element _categoryE = (Element) entry.getElementsByTagName("category").item(0);
                Element _pubDateE = (Element) entry.getElementsByTagName("pubDate").item(0);
                Element _linkE = (Element) entry.getElementsByTagName("link").item(0);

                String _title = _titleE.getFirstChild().getNodeValue();
                String _category = _categoryE.getFirstChild().getNodeValue();
                Date _pubDate = new Date(_pubDateE.getFirstChild().getNodeValue());
                String _link = _linkE.getFirstChild().getNodeValue();

                // create an RssItem object and add it to the ArrayList
                RssItem rssItem = new RssItem(_title, _category, _pubDate, _link);
                rssItems.add(rssItem);

                conn.disconnect();
            }
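
    A likely culprit is that the character reference &#039; makes the parser produce more than one child text node under <title>, so getFirstChild().getNodeValue() only sees the first fragment, or nothing useful at all. getTextContent() (DOM Level 3, Java 5+) concatenates all descendant text and is the simpler accessor; a sketch of just the extraction lines:

        // getTextContent() returns the element's whole text, even when the
        // parser split it into several nodes around &#039; references.
        String _title    = _titleE.getTextContent();
        String _category = _categoryE.getTextContent();
        Date   _pubDate  = new Date(_pubDateE.getTextContent());
        String _link     = _linkE.getTextContent();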

  • Get information from various sources

    - by Francesc
    Hi. I'm developing an app that has to get some information from various sources (APIs and RSS) and display it to the user in near real time. What's the best way to get it?

    1. Have a cron job update all the accounts every 12h, and when a user requests one, update that account, save it to the DB, and show it to the user?
    2. Have a cron job update all the accounts every 6h, and when a user requests one, update the account and show it to the user without saving it to the DB?

    Which is the best way? Which is faster? And which is the most scalable?

  • Not sure what happens to my app's objects when using NSURLSession in the background - what state is my app in?

    - by Avner Barr
    More of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code.

    I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point.

    For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server -
                                             // this could also be a queryable property elsewhere, but let's
                                             // assume it is attached to this object
        @end

    Now let's say I want to upload the profile picture to a remote server. I could do something like:

        @implementation ProfilePictureUploader

        - (void)uploadProfilePicture:(ProfilePicture *)profilePicture
                          completion:(void (^)(BOOL successInUploading))completion {
            NSURLSession *uploadImageSession = ...; // code to set up uploading the image - and calling the completion handler
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture - and if it was successful, update the UI and the database to reflect that:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ....];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db
                aNewProfilePicture.successfullyUploaded = YES;
                [Mydase update:aNewProfilePicture]; // persist the change
            }
        }];

    Now obviously, if my app is running, this ProfilePicture object is successfully uploaded and all is well - the database has its own internal workings with data structures/caches and whatnot, all callbacks that may exist are maintained, and the app state is straightforward.

    But I'm not clear on what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process, therefore the upload will continue and my app will be awakened at some point in the future to handle completion. But the object aNewProfilePicture is nonexistent at that point, and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the successfullyUploaded property for that user)? Do I need to rework everything touching the DB or UI to correspond with the new API and work in a context-free environment?

  • Is there a limit on the number of times navigator.geolocation.getCurrentPosition can be called?

    - by Raja
    Hi all, this may not be a true programming question, but it deals with the geolocation API, so I'm hoping Stack Overflow is the right place for it. I'm calling navigator.geolocation.getCurrentPosition at a 3-second interval. After 10-15 tries the responses stop. So I'm wondering: is there a limit on the number of calls that can be made, or is it because I'm testing on a desktop, and instead of giving back the same response each time, the API is waiting for a change of location? Does anyone have any experience to share? Thanks.
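
    If the aim is to track movement rather than poll, watchPosition() is the intended API: the browser pushes an update whenever the position changes (and on a desktop it rarely does, which by itself can look like the responses "stopping"). A short sketch:

        // The success callback fires on each position change; clearWatch() stops it.
        var watchId = navigator.geolocation.watchPosition(
            function (pos) { console.log(pos.coords.latitude, pos.coords.longitude); },
            function (err) { console.log('geolocation error: ' + err.code); },
            { enableHighAccuracy: false, maximumAge: 3000, timeout: 10000 }
        );
        // later: navigator.geolocation.clearWatch(watchId);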

  • Is there an open source repository for SQL code?

    - by morpheous
    I find myself writing SQL code (queries or stored procs) to solve problems that can definitely be described as 'patterns' that occur frequently in business. Rather than having to rack my brain each time I encounter a new problem (which must have been solved countless times by other coders/DB analysts), I wondered if there was a repository where I could check out (peer-reviewed) code - and maybe add my two pence every now and then. I know different DB vendors tend to write slightly variant forms of SQL, but there could still be a repository with ANSI material and proprietary material. Hopefully, such a site would encourage more people to write standardized SQL. Is there such a site? If not, why not? (Would anyone else be interested in such a site?) If such a site exists, please provide link(s), as Google is not finding anything remotely useful.

  • Can't get the proper use of DropDownListFor with a model and a ViewBag element

    - by EH_warch
    I have a list of locations set in a ViewBag element like this:

        public ActionResult Create()
        {
            var db = new ErrorReportingSystemContext();
            IEnumerable<SelectListItem> items = db.Locations
                .AsEnumerable()
                .Select(c => new SelectListItem
                {
                    Value = c.id.ToString(),
                    Text = c.location_name
                });
            ViewBag.locations = items;
            return View();
        }

    I'm trying to read the values from ViewBag.locations in the view like this:

        @Html.DropDownListFor(model => model.location_fk_id, ViewBag.locations);
        //@Html.DropDownListFor(model => model.location_fk_id, @ViewBag.locations);
        //@Html.DropDownListFor(model => model.location_fk_id, "locations");

    But to no avail. How can I use it?
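
    ViewBag is dynamic, so the DropDownListFor overload cannot be resolved until the value is cast back to IEnumerable<SelectListItem>. A minimal sketch of the view line, with an optional placeholder label:

        @Html.DropDownListFor(model => model.location_fk_id,
                              (IEnumerable<SelectListItem>)ViewBag.locations,
                              "-- select location --")

    Alternatively, putting the list on a view-model property avoids both the cast and the magic string.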

  • Java: Allowing the child thread to kill itself on InterruptedException?

    - by Zombies
    I am using a thread pool via ExecutorService. Calling shutdownNow() interrupts all running threads in the pool. When this happens I want these threads to give up their resources (socket and DB connections) and simply die, but without running any more logic, e.g. without inserting anything into the DB. What is the simplest way to achieve this? Below is some sample code:

        public void threadTest() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(999999);
                    } catch (InterruptedException e) {
                        // invoke thread suicide logic here
                    }
                }
            });
            t.start();
            t.interrupt();

            try {
                Thread.sleep(4000);
            } catch (InterruptedException e) {
            }
        }
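
    One simple shape for the "thread suicide logic": release the resources in a finally block, restore the interrupt flag, and return without touching the DB again. A sketch with placeholder resource names:

        public void run() {
            try {
                Thread.sleep(999999);               // or the real blocking work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve the flag for any caller
                return;                             // no further business logic, no DB insert
            } finally {
                // release resources unconditionally (names are illustrative)
                // socket.close();
                // connection.close();
            }
        }

    Tasks submitted to the pool should also check Thread.currentThread().isInterrupted() at loop boundaries, so sections that never block still notice the shutdown.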
