Search Results

Search found 10177 results on 408 pages for 'thumbs db'.

Page 349/408

  • Rails Heroku Migrate Unknown Error

    - by Ryan Max
    Hello. I am trying to get my app up and running on Heroku. However, once I go to migrate I get the following error: $ heroku rake db:migrate rake aborted! An error has occurred, this and all later migrations canceled: 530 5.7.0 Must issue a STARTTLS command first. bv42sm676794ibb.5 (See full trace by running task with --trace) (in /disk1/home/slugs/155328_f2d3c00_845e/mnt) == BortMigration: migrating ================================================= -- create_table(:sessions) -> 0.1366s -- add_index(:sessions, :session_id) -> 0.0759s -- add_index(:sessions, :updated_at) -> 0.0393s -- create_table(:open_id_authentication_associations, {:force=>true}) -> 0.0611s -- create_table(:open_id_authentication_nonces, {:force=>true}) -> 0.0298s -- create_table(:users) -> 0.0222s -- add_index(:users, :login, {:unique=>true}) -> 0.0068s -- create_table(:passwords) -> 0.0123s -- create_table(:roles) -> 0.0119s -- create_table(:roles_users, {:id=>false}) -> 0.0029s I'm not sure exactly what this error means. Could it have to do with my Bort installation? I did remove all the open-id stuff from it, but I never had any problems with my migrations locally. Additionally, on Bort the Restful Authentication uses my Gmail SMTP to send confirmation emails, and all the searches I do on Google for STARTTLS have to do with SMTP. Can someone point me in the right direction?
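
    The 530 response means the SMTP server refuses to authenticate until the client upgrades the connection with STARTTLS, which points at the mailer configuration rather than the migration itself. Purely to illustrate that handshake (not the Rails/Heroku fix), here is a minimal Python smtplib sketch; the addresses and credentials are placeholders:

    ```python
    # Gmail's submission port requires STARTTLS before AUTH -- the condition the
    # 530 error complains about.
    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText("Confirmation email body")
    msg["Subject"] = "Welcome"
    msg["From"] = "sender@example.com"      # placeholder
    msg["To"] = "recipient@example.com"     # placeholder

    server = smtplib.SMTP("smtp.gmail.com", 587)
    server.ehlo()
    server.starttls()   # skipping this triggers "530 Must issue a STARTTLS command first"
    server.ehlo()
    server.login("sender@example.com", "app-password")   # placeholder credentials
    server.send_message(msg)
    server.quit()
    ```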

    Read the article

  • How to get JSON data into a MySQL database?

    - by the_boy_za
    I've created this form with a jQuery autocomplete function. The selected brand from the autocomplete list needs to get sent to a PHP file using the $.ajax function. My code doesn't seem to work, but I can't find the error, and I don't know why the data isn't getting inserted into the MySQL database. Here is my code: jQuery: $(document).ready(function() { $("#autocomplete").autocomplete({ minLength: 2 }); $("#autocomplete").autocomplete({ source: ["Adidas", "Airforce", "Alpha Industries", "Asics", "Bikkemberg", "Birkenstock", "Bjorn Borg", "Brunotti", "Calvin Klein", "Cars Jeans", "Chanel", "Chasin", "Diesel", "Dior", "DKNY", "Dolce & Gabbana"] }); $("#add-brand").click(function() { var merk = $("#autocomplete").val(); $("#selected-brands").append(" <a class=\"deletemerk\" href=\"#\">" + merk + "</a>"); //Add your parameters here var param = JSON.stringify({ Brand: merk }); $.ajax({ type: "POST", async: true, url: "scripttohandlejson.php", contentType: "application/json", data: param, dataType: "json", success: function (good){ //handle success alert(good) }, failure: function (bad){ //handle any errors alert(bad) } }); return false; }); }); PHP FILE: scripttohandlejson.php <?PHP $getcontent = json_decode($json, true); $getcontent->{'Brand'}; $vraag ="INSERT INTO kijken (merk) VALUES ='$getcontent' "; $voerin = mysql_query($vraag) or die("couldnt put into db"); <?
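
    As a language-neutral illustration of the flow being attempted (read a JSON request body, pull out one field, insert it with a parameterized query), here is a small Python/sqlite3 sketch; the table and field names mirror the ones above, everything else is hypothetical:

    ```python
    # Sketch: decode a JSON body and insert one value with a bound parameter
    # instead of building the SQL string by hand.
    import json
    import sqlite3

    def handle_json_body(raw_body: bytes) -> None:
        payload = json.loads(raw_body)                   # e.g. b'{"Brand": "Adidas"}'
        brand = payload["Brand"]
        con = sqlite3.connect("kijken.db")
        con.execute("CREATE TABLE IF NOT EXISTS kijken (merk TEXT)")
        con.execute("INSERT INTO kijken (merk) VALUES (?)", (brand,))   # bound parameter, no concatenation
        con.commit()
        con.close()

    handle_json_body(b'{"Brand": "Adidas"}')
    ```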

    Read the article

  • Security: deleting a MySQL row with jQuery $.post

    - by FFish
    I want to delete a row in my database and found an example on how to do this with jQuery's $.post() Now I am wondering about security though.. Can someone send a POST request to my delete-row.php script from another website? JS function deleterow(id) { // alert(typeof(id)); // number if (confirm('Are you sure want to delete?')) { $.post('delete-row.php', {album_id:+id, ajax:'true'}, function() { $("#row_"+id).fadeOut("slow"); }); } } PHP: delete-row.php <?php require_once("../db.php"); mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die("could not connect to database " . mysql_error()); mysql_select_db(DB_NAME) or die("could not select database " . mysql_error()); if (isset($_POST['album_id'])) { $query = "DELETE FROM albums WHERE album_id = " . $_POST['album_id']; $result = mysql_query($query); if (!$result) die('Invalid query: ' . mysql_error()); echo "album deleted!"; } ?>
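
    On the security question: yes, any page on any site can POST to delete-row.php, and the id is concatenated straight into the SQL. A hedged Python/sqlite3 sketch of the usual countermeasures (per-session token check, numeric validation, bound parameter); the function and token names are made up:

    ```python
    # Sketch of server-side checks for a delete request: CSRF token, numeric id,
    # and a parameterized DELETE.
    import sqlite3

    def delete_album(album_id_raw: str, submitted_token: str, session_token: str) -> bool:
        if submitted_token != session_token:     # reject requests forged from other sites
            return False
        try:
            album_id = int(album_id_raw)         # reject non-numeric ids (blocks injection attempts)
        except ValueError:
            return False
        con = sqlite3.connect("albums.db")
        con.execute("CREATE TABLE IF NOT EXISTS albums (album_id INTEGER PRIMARY KEY)")
        con.execute("DELETE FROM albums WHERE album_id = ?", (album_id,))
        con.commit()
        con.close()
        return True

    print(delete_album("42", "token123", "token123"))    # True
    ```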

    Read the article

  • Saving and retrieving image in SQL database from C# problem

    - by Mobin
    I used this code for inserting records into a person table in my DB: System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(), Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date, Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(), ms.GetBuffer()); When I retrieve it with this code byte[] image = (byte[])Person_On_Application.Rows[0][8]; MemoryStream Stream = new MemoryStream(); Stream.Write(image, 0, image.Length); Bitmap Display_Picture = new Bitmap(Stream); Image_Box.Image = Display_Picture; it works perfectly fine, but if I update the record with my own generated query like System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); Query = "UPDATE Person SET Person_Image ='"+ms.GetBuffer()+"' WHERE (Person_NIC = '"+NIC_Box.Text.Trim()+"')"; then the next time I use the same retrieval code to display the image, the program throws an exception.
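
    The hand-built UPDATE concatenates the byte array into the SQL text (in C# that stores the string "System.Byte[]" rather than the image data), whereas the generated Insert passes it as a parameter. A conceptual Python/sqlite3 sketch of the parameterized round trip; the table and column names are made up to mirror the question:

    ```python
    # Binary data should travel as a bound parameter, not inside the SQL string.
    import sqlite3

    image_bytes = b"\x42\x4d\x36\x00\x00\x00"     # stand-in for the real bitmap buffer

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE person (person_nic TEXT PRIMARY KEY, person_image BLOB)")
    con.execute("INSERT INTO person VALUES (?, ?)", ("12345", sqlite3.Binary(image_bytes)))
    con.execute("UPDATE person SET person_image = ? WHERE person_nic = ?",
                (sqlite3.Binary(image_bytes), "12345"))
    stored = con.execute("SELECT person_image FROM person WHERE person_nic = ?",
                         ("12345",)).fetchone()[0]
    assert bytes(stored) == image_bytes           # the image round-trips intact
    con.close()
    ```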

    Read the article

  • How to use SQLErrorCodeSQLExceptionTranslator and DAO class with @Repository in Spring?

    - by GuidoMB
    I'm using Spring 3.0.2 and I have a class called MovieDAO that uses JDBC to handle the DB. I have set the @Repository annotation and I want to convert the SQLException to Spring's DataAccessException. I have the following example: @Repository public class JDBCCommentDAO implements CommentDAO { static JDBCCommentDAO instance; ConnectionManager connectionManager; private JDBCCommentDAO() { connectionManager = new ConnectionManager("org.postgresql.Driver", "postgres", "postgres"); } static public synchronized JDBCCommentDAO getInstance() { if (instance == null) instance = new JDBCCommentDAO(); return instance; } @Override public Collection<Comment> getComments(User user) throws DAOException { Collection<Comment> comments = new ArrayList<Comment>(); try { String query = "SELECT * FROM Comments WHERE Comments.userId = ?"; Connection conn = connectionManager.getConnection(); PreparedStatement stmt = conn.prepareStatement(query); stmt = conn.prepareStatement(query); stmt.setInt(1, user.getId()); ResultSet result = stmt.executeQuery(); while (result.next()) { Movie movie = JDBCMovieDAO.getInstance().getLightMovie(result.getInt("movie")); comments.add(new Comment(result.getString("text"), result.getInt("score"), user, result.getDate("date"), movie)); } connectionManager.closeConnection(conn); } catch (SQLException e) { e.printStackTrace(); //CONVERT TO DATAACCESSEXCEPTION } return comments; } } I don't know how to get the translator, and I don't want to extend any Spring class, because that is why I'm using the @Repository annotation.

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my Transaction log the other day and it was something crazy like 15GB. I ran the following code: USE mydb GO BACKUP LOG mydb WITH TRUNCATE_ONLY GO DBCC SHRINKFILE(mydb_log,8) GO Which worked fine, shrank it down to 8MB...but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quick. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task, and hooking it on to my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan: Full backups every night Transaction log backups once a day, late morning (maybe hook the Log shrinking onto this...doesn't need to be shrank every day though) Or maybe I just run it once a week, after I run a full backup task? What do you all think?

    Read the article

  • Auditing in Entity Framework

    - by Gabriel Susai
    After going through Entity Framework I have a couple of questions on implementing auditing in Entity Framework. I want to store each column value that is created or updated in a separate audit table. Right now I am calling SaveChanges(false) to save the records in the DB (the changes in the context are still not reset). Then I get the added/modified records and loop through the GetObjectStateEntries. But I don't know how to get the values of the columns that are filled by a stored proc, i.e. createdate, modifieddate, etc. Below is the sample code I am working on. //Get the changed entries (ie, records) IEnumerable<ObjectStateEntry> changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified); //Iterate each ObjectStateEntry( for each record in the update/modified collection) foreach (ObjectStateEntry entry in changes) { //Iterate the columns in each record and get their old and new value respectively foreach (var columnName in entry.GetModifiedProperties()) { string oldValue = entry.OriginalValues[columnName].ToString(); string newValue = entry.CurrentValues[columnName].ToString(); //Do Some Auditing by sending entityname, columnname, oldvalue, newvalue } } changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added); foreach (ObjectStateEntry entry in changes) { if (entry.IsRelationship) continue; var columnNames = (from p in entry.EntitySet.ElementType.Members select p.Name).ToList(); foreach (var columnName in columnNames) { string newValue = entry.CurrentValues[columnName].ToString(); //Do Some Auditing by sending entityname, columnname, value } }
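
    Setting the C# specifics aside, the column-level auditing step itself is just a diff of original versus current values per field. A language-agnostic Python sketch of that idea (all names are hypothetical):

    ```python
    # Emit one audit record per column whose value actually changed.
    def audit_changes(entity_name, original, current):
        audit_rows = []
        for column, new_value in current.items():
            old_value = original.get(column)
            if old_value != new_value:
                audit_rows.append({"entity": entity_name, "column": column,
                                   "old": old_value, "new": new_value})
        return audit_rows

    before = {"name": "Ann", "city": "Oslo", "modifieddate": None}
    after = {"name": "Ann", "city": "Bergen", "modifieddate": "2010-05-18"}
    for row in audit_changes("Person", before, after):
        print(row)
    # {'entity': 'Person', 'column': 'city', 'old': 'Oslo', 'new': 'Bergen'}
    # {'entity': 'Person', 'column': 'modifieddate', 'old': None, 'new': '2010-05-18'}
    ```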

    Read the article

  • Setting objects (not users) inactive after period of time in asp.net mvc

    - by bastijn
    This question is mainly to verify my current idea. I have a series of objects which I want to be active for a specified amount of time. For instance objects like ads which are shown in the ad space only for the amount of time bought, objects in search results which should only pop up when active, and frontpage posts which should be set to inactive after a prespecified time. My current idea is to just give those type of objects a StartDate and EndDate and filter the search routines to only show results which fall in the range StartDate < currentDate < EndDate. Is this the normal structure or should there be some sort of auto-routine which routinely checks for objects which are "over-time" and set a property "inactive" to true or something. Seems like this approach is such a hassle since I need a checker which runs say, every 5 minutes, to scan all DB objects. Seems like a bad idea to me. So is the first structure the most commonly used or are there any other options? When searching on google or SO the search queries only return results setting users inactive.
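
    The first approach is the common one: no background job, just filter on the date window at query time. A tiny Python sketch of that filter, with plain objects standing in for DB rows:

    ```python
    # Items are "active" iff start <= now <= end; nothing ever has to be flipped
    # to inactive by a scheduled job.
    from datetime import datetime

    ads = [
        {"title": "Front page ad", "start": datetime(2010, 5, 1), "end": datetime(2010, 6, 1)},
        {"title": "Expired ad",    "start": datetime(2010, 3, 1), "end": datetime(2010, 4, 1)},
    ]

    def active(items, now=None):
        now = now or datetime.now()
        return [item for item in items if item["start"] <= now <= item["end"]]

    print([item["title"] for item in active(ads, now=datetime(2010, 5, 15))])   # ['Front page ad']
    ```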

    Read the article

  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    Ok so I'm trying to build this stat tracker for my app and I have built a data model object called statTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the selectors, but if I try and use KVC (ie setValue: forKey: ) everything goes bad and says my StatTracker class is not KVC compliant: valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".' 2010-05-18 15:55:08.573 here's the code that is triggering it: NSArray *statTrackerArray = [[NSArray alloc] init]; statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker]; NSNumber *number1 = [[NSNumber alloc] init]; number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])]; [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched" ]; NSError *error; if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) { NSLog(@"error writing to db"); } Not sure if this is enough code for you folks let me know what you need if you do need more. This would be so sweet if I could use KVC because I could then abstract all this stat tracking stuff into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now understanding the power of KVC but now I'm just trying to figure out how to make it work. Thanks! Nick

    Read the article

  • Convert UCS-2 characters to UTF-8 Using C#

    - by quanticle
    I'm pulling some internationalized text from a MS SQL Server 2005 database. As per the defaults for that DB, the characters are stored as UCS-2. However, I need to output the data in UTF-8 format, as I'm sending it out over the web. Currently, I have the following code to convert: SqlString dbString = resultReader.GetSqlString(0); byte[] dbBytes = dbString.GetUnicodeBytes(); byte[] utf8Bytes = System.Text.Encoding.Convert(System.Text.Encoding.Unicode, System.Text.Encoding.UTF8, dbBytes); System.Text.UTF8Encoding encoder = new System.Text.UTF8Encoding(); string outputString = encoder.GetString(utf8Bytes); However, when I examine the output in the browser, it appears to be garbage, no matter what I set the encoding to. What am I missing? EDIT: In response to the answers below, the reason I thought I had to perform a conversion is because I can output literal multibyte strings just fine. For example: OutputControl.Text = "????????????????????????????????????????????????????????????????"; works. Here, OutputControl is an ASP.Net Literal. However, OutputControl.Text = outputString; //Output from above snippet results in mangled output as described above. My hypothesis was that the database's output was somehow getting mangled by ASP.Net. If that's not the case, then what are some other possibilities?
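
    For comparison, the same conversion written in Python makes the two distinct steps explicit: decode the UTF-16/UCS-2 bytes into a text string, then encode that string as UTF-8 for the response. This is an illustration of the concept, not the ASP.NET fix itself:

    ```python
    # SQL Server nvarchar data is little-endian UTF-16 on the wire; decode it to
    # text first, then encode the text as UTF-8 for output.
    ucs2_bytes = "héllo wörld".encode("utf-16-le")   # stand-in for GetUnicodeBytes()

    text = ucs2_bytes.decode("utf-16-le")            # bytes -> str
    utf8_bytes = text.encode("utf-8")                # str   -> UTF-8 bytes

    print(text)          # héllo wörld
    print(utf8_bytes)    # b'h\xc3\xa9llo w\xc3\xb6rld'
    ```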

    Read the article

  • How should I design my MYSQL table/s?

    - by yaya3
    I built a really basic php/mysql site for an architect that uses one 'projects' table. The website showcases various projects that he has worked on. Each project contained one piece of text and one series of images. Original projects table (create syntax): CREATE TABLE `projects` ( `project_id` int(11) NOT NULL auto_increment, `project_name` text, `project_text` text, `image_filenames` text, `image_folder` text, `project_pdf` text, PRIMARY KEY (`project_id`) ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1; The client now requires the following, and I'm not sure how to handle the expansions in my DB. My suspicion is that I will need an additional table. Each project now have 'pages'. Pages either contain... One image One "piece" of text One image and one piece of text. Each page could use one of three layouts. As each project does not currently have more than 4 pieces of text (a very risky assumption) I have expanded the original table to accommodate everything. New projects table attempt (create syntax): CREATE TABLE `projects` ( `project_id` int(11) NOT NULL AUTO_INCREMENT, `project_name` text, `project_pdf` text, `project_image_folder` text, `project_img_filenames` text, `pages_with_text` text, `pages_without_img` text, `pages_layout_type` text, `pages_title` text, `page_text_a` text, `page_text_b` text, `page_text_c` text, `page_text_d` text, PRIMARY KEY (`project_id`) ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1; In trying to learn more about MYSQL table structuring I have just read an intro to normalization and A Simple Guide to Five Normal Forms in Relational Database Theory. I'm going to keep reading! Thanks in advance
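
    One way to normalize this (a sketch, not the only valid design) is to pull pages out into their own table with a foreign key back to projects and one row per page, so a project can have any number of pages and each page carries its own layout type, text, and image. sqlite3 is used below only to keep the example runnable; the schema translates directly to MySQL:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE projects (
        project_id   INTEGER PRIMARY KEY,
        project_name TEXT,
        project_pdf  TEXT
    );
    CREATE TABLE pages (
        page_id     INTEGER PRIMARY KEY,
        project_id  INTEGER NOT NULL REFERENCES projects(project_id),
        page_order  INTEGER NOT NULL,
        layout_type TEXT,      -- e.g. 'image', 'text', 'image_and_text'
        page_text   TEXT,
        image_file  TEXT
    );
    """)
    con.execute("INSERT INTO projects (project_name) VALUES ('House by the sea')")
    con.execute("INSERT INTO pages (project_id, page_order, layout_type, page_text) "
                "VALUES (1, 1, 'text', 'Introduction to the project')")
    print(con.execute("SELECT project_name, page_order, page_text "
                      "FROM projects JOIN pages USING (project_id)").fetchall())
    con.close()
    ```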

    Read the article

  • EF Forced Concurrency Checks

    - by Imran
    Hi, I have an issue with EF 4.0 that I hope someone can help with. I currently have an entity that I want to update in a last in wins fashion (i.e. ignore concurrency checks and just overwrite whats in the db with what is submitted). It seems Entity Framework not only includes the primary key of the entity in the where clause of the generated sql, but also any foreign key fields. This is annoying as it means that I don't get true last in wins semantics and need to know what value the fk field had before the update or I get a concurrency exception. I am aware that this can be short circuited by including a foreign key field as well as the navigation property on the entity. I would like to avoid this if possible as it's not a very clean solution. I was just wondering if there was any other way to override this behaviour? It seems like more of a bug than a feature. I have no problem with ef doing concurrency checks if I instruct it to do so but not being able to bypass concurrency completely is a bit of a hindrance as there are many valid scenarios where this is not needed

    Read the article

  • Long text input from user and PDF generation

    - by Petteri Hietavirta
    I have built a web application that can be seen as an overcomplicated application form. There are bunch of text areas with a given character limit. After the form submission various things happen and one of them is PDF generation. The text is queried from the DB and inserted in the PDF template created in iReports. This works fine but the major pain is overflowing text. The maximum number of characters is set based on 'average' text. But sometimes people prefer to write with CAPS or add plenty of linefeeds to format their text. These then cause user's text to overflow the space given in PDF. Unfortunately the PDF document must look like a real application form so I cannot allow unlimited space. What kind of approaches you have used to tackle this? Clean/restrict user input? Calculate the space requirement of the text based on font metrics? Provide preview of the PDF? (too bad users are not allowed to change their input after submission...)
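
    For the "calculate the space requirement" option, a rough pre-check is to estimate how many rendered lines the text needs, counting both explicit linefeeds and wrapping, and reject or trim input that exceeds the lines available on the form. A hedged Python sketch; the width and line budget are made-up numbers:

    ```python
    # Estimate rendered line count: each paragraph contributes at least one line,
    # plus however many wrapped lines it needs at the given width.
    import textwrap

    def estimated_lines(text: str, chars_per_line: int = 80) -> int:
        total = 0
        for paragraph in text.split("\n"):
            total += max(1, len(textwrap.wrap(paragraph, width=chars_per_line)))
        return total

    answer = "SOME SHOUTED TEXT\n\n\n" + "word " * 120
    print(estimated_lines(answer, chars_per_line=60))   # compare against the field's line budget, e.g. 12
    ```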

    Read the article

  • How to insert and call by row and column into sqlite3 python

    - by user291071
    Let's say I have a simple array of x rows and y columns with corresponding values. What is the best method to do three things: insert or update a value at a specific row and column, and select a value for a given row and column? import sqlite3 con = sqlite3.connect('simple.db') c = con.cursor() c.execute('''create table simple (links text)''') con.commit() dic = {'x1':{'y1':1.0,'y2':0.0},'x2':{'y1':0.0,'y2':2.0,'y3':1.5},'x3':{'y2':2.0,'y3':1.5}} ucols = {} ## my current thoughts are collect all row values and all column values from dic and populate table rows and columns accordingly; how to call by row and column I haven't figured out yet ##populate rows in first column for row in dic: print row c.execute("""insert into simple ('links') values ('%s')"""%row) con.commit() ##unique columns for row in dic: print row for col in dic[row]: print col ucols[col]=dic[row][col] ##populate columns for col in ucols: print col c.execute("alter table simple add column '%s' 'float'" % col) con.commit() #functions needed ##insert values into sql by row x and column y? how to do this e.g. x1 and y2 should put in 0.0 ##I tried as follows, didn't work for row in dic: for col in dic[row]: val =dic[row][col] c.execute("""update simple SET '%s' = '%f' WHERE 'links'='%s'"""%(col,val,row)) con.commit() ##update value at a specific row x and column y? ## select a value at a specific row x and column y?
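
    A cleaner version of the same idea, still a sketch: build the column list from the dictionary once, then insert, update, and select individual cells with parameterized queries. Only the column identifiers, which come from our own dict rather than user input, are interpolated into the SQL:

    ```python
    import sqlite3

    dic = {'x1': {'y1': 1.0, 'y2': 0.0},
           'x2': {'y1': 0.0, 'y2': 2.0, 'y3': 1.5},
           'x3': {'y2': 2.0, 'y3': 1.5}}

    con = sqlite3.connect(":memory:")
    cols = sorted({col for row in dic.values() for col in row})        # ['y1', 'y2', 'y3']
    con.execute("CREATE TABLE simple (links TEXT PRIMARY KEY, %s)" %
                ", ".join('"%s" REAL' % col for col in cols))

    for row, values in dic.items():
        con.execute("INSERT INTO simple (links) VALUES (?)", (row,))
        for col, val in values.items():
            # column names cannot be bound parameters; they come from our own
            # dict, not from user input, so interpolating them here is safe
            con.execute('UPDATE simple SET "%s" = ? WHERE links = ?' % col, (val, row))
    con.commit()

    print(con.execute('SELECT "y2" FROM simple WHERE links = ?', ("x1",)).fetchone()[0])   # 0.0
    ```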

    Read the article

  • How to perform undirected graph processing from SQL data

    - by recipriversexclusion
    I ran into the following problem in dynamically creating topics for our ActiveMQ system: I have a number of processes (M_1, ..., M_n), where n is not large, typically 5-10. Some of the processes will listen to the output of others, through a message queue; these edges are specified in an XML file, e.g. <link from="M1" to="M3"></link> <link from="M2" to="M4"></link> <link from="M3" to="M4"></link> etc. The edges are sparse, so there won't be many of them. I will parse this XML and store this information in an SQL DB, one table for nodes and another for edges. Now, I need to dynamically create strings of the form M1.exe --output_topic=T1 M2.exe --output_topic=T2 M3.exe --input_topic=T1 --output_topic=T3 M4.exe --input_topic=T2 --input_topic=T3 where the tags are sequentially generated. What is the best way to go about querying SQL to obtain these relationships? Are there any tools or other tutorials you can point me to? I've never done graphs with SQL. Using SQL is imperative, because we use it for other stuff, too. Thanks!
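
    Since the edge table is small, one workable approach is to read all edges with a single query and assemble the command lines in the application layer rather than in SQL itself. A hedged Python/sqlite3 sketch that reproduces the example strings above; the table and column names are assumptions:

    ```python
    import sqlite3
    from collections import defaultdict

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE edges (src TEXT, dst TEXT)")
    con.executemany("INSERT INTO edges VALUES (?, ?)",
                    [("M1", "M3"), ("M2", "M4"), ("M3", "M4")])

    out_topic = {}                                   # one output topic per producing process
    inputs = defaultdict(list)                       # process -> topics it must subscribe to
    for src, dst in con.execute("SELECT src, dst FROM edges"):
        if src not in out_topic:
            out_topic[src] = "T%d" % (len(out_topic) + 1)
        inputs[dst].append(out_topic[src])

    for node in sorted(set(out_topic) | set(inputs)):
        flags = ["--input_topic=%s" % t for t in inputs.get(node, [])]
        if node in out_topic:
            flags.append("--output_topic=%s" % out_topic[node])
        print("%s.exe %s" % (node, " ".join(flags)))
    # M1.exe --output_topic=T1
    # M2.exe --output_topic=T2
    # M3.exe --input_topic=T1 --output_topic=T3
    # M4.exe --input_topic=T2 --input_topic=T3
    ```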

    Read the article

  • GET and XMLHttpRequest

    - by Neethusha
    I have an XMLHttpRequest. The request passes a parameter to my PHP server code in /var/www, but I cannot seem to extract the parameter on the server side. Below I have pasted both pieces of code: javascript: function getUsers(u) { alert(u);//here u is 'http://start.ubuntu.com/9.10' xmlhttp=new XMLHttpRequest(); var url="http://localhost/servercode.php"+"?q="+u; xmlhttp.onreadystatechange= useHttpResponse; xmlhttp.open("GET",url,true); xmlhttp.send(null); } function useHttpResponse() { if (xmlhttp.readyState==4 ) { var response = eval('('+xmlhttp.responseText+')'); for(i=0;i<response.Users.length;i++) alert(response.Users[i].UserId); } } servercode.php: <?php $q=$_GET["q"]; //$q="http://start.ubuntu.com/9.10"; $con=mysql_connect("localhost","root","blaze"); if(!$con) {die('could not connect to database'.mysql.error()); } mysql_select_db("BLAZE",$con) or die("No such Db"); $result=mysql_query("SELECT * FROM USERURL WHERE URL='$q'"); if($result == null) echo 'nobody online'; else { header('Content-type: text/html'); echo "{\"Users\":["; while($row=mysql_fetch_array($result)) { echo '{"UserId":"'.$row[UsrID].'"},'; } echo "]}"; } mysql_close($con); ?> This is not giving the required result, although the commented statement, where the variable is explicitly assigned the value of the argument, works and alerts me the required output. Somehow the GET method's parameter is not reaching my PHP, or that's how I think it is. Please help.
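
    Whatever the root cause turns out to be, passing an arbitrary URL as a query parameter without percent-encoding (encodeURIComponent(u) in JavaScript) will break as soon as the URL contains characters like '&', '?' or '#'. A Python sketch of the encode/decode round trip, purely to show what the server should receive:

    ```python
    # Percent-encode the parameter so an arbitrary URL survives the query string,
    # and show what the server sees after decoding it again.
    from urllib.parse import urlencode, parse_qs

    u = "http://start.ubuntu.com/9.10?lang=en#top"      # hypothetical worst case
    query = urlencode({"q": u})
    print("http://localhost/servercode.php?" + query)
    # http://localhost/servercode.php?q=http%3A%2F%2Fstart.ubuntu.com%2F9.10%3Flang%3Den%23top

    print(parse_qs(query)["q"][0])                      # the original URL, intact
    ```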

    Read the article

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008 C#, with SQL Server from Visual Studio 2008. The database has about 20 tables and many fields in each. I have made an interface for adding, deleting, editing, and retrieving data according to predefined needs of the users. Now I have to make the project into software which I can deliver to my professor. That is, he can just double click the icon and the software simply starts; no Visual Studio 2008 needed to start the debugging. The database will be on one powerful computer (dual core, latest everything, Windows XP) and the user will access it from another computer connected using LAN. I am able to change the connection string to the shared database using Visual Studio 2008/debugger whenever the server changes, but how am I supposed to do that when it's software? There will be many clients. Am I supposed to give the same software to everyone, so they all can connect to the database? How will the integrity and correctness of the database be maintained? I mean the db.mdf file will be in a folder which will be shared with read and write access, so it's not necessary that only one user will write at a time. So is there any coding for this?

    Read the article

  • Ajax long polling (comet) + PHP on lighttpd v1.4.22 multiple instances problem

    - by fibonacci
    Hi, I am new to this site, so I really hope I will provide all the necessary information regarding my question. I've been trying to create a "new message arrived notification" using long polling. Currently I am initiating the polling request by window.onLoad event of each page in my site. On the server side I have an infinite loop: while(1){ if(NewMessageArrived($current_user))break; sleep(10); } echo $newMessageCount; On the client side I have the following (simplified) ajax functions: poll_new_messages(){ xmlhttp=GetXmlHttpObject(); //... xmlhttp.onreadystatechange=got_new_message_count; //... xmlhttp.send(); } got_new_message_count(){ if (xmlhttp.readyState==4){ updateMessageCount(xmlhttp.responseText); //... poll_new_messages(); } } The problem is that with each page load, the above loop starts again. The result is multiple infinite loops for each user that eventually make my server hang. *The NewMessageArived() function queries MySQL DB for new unread messages. *At the beginning of the php script I run start_session() in order to obtain the $current_user value. I am currently the only user of this site so it is easy for me to debug this behavior by writing time() to a file inside this loop. What I see is that the file is being written more often than once in 10 seconds, but it starts only when I go from page to page. Please let me know if any additional information might help. Thank you.
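
    A common variation that avoids piling up open-ended loops is to bound the server-side wait (say 30 seconds) and return 0 on timeout, letting the client simply re-issue the request; combined with starting only one poller per window, the number of concurrent loops stays predictable. A minimal Python sketch of the bounded wait, with a callback standing in for the DB check:

    ```python
    import time

    def poll_new_messages(new_message_count, timeout=30.0, interval=2.0):
        """Wait up to `timeout` seconds, checking every `interval` seconds."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            count = new_message_count()          # stand-in for the SQL check
            if count:
                return count
            time.sleep(interval)
        return 0                                 # timed out; the client re-polls

    print(poll_new_messages(lambda: 3))          # returns 3 on the first check
    ```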

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broader (in terms of the columns) and deeper (in terms of number of records). We observed that DataSet performs much better in terms of performance but runs into constraints as the number of records increases; say at 100K+ we start seeing System.OutOfMemoryException on a 4G machine with processModel configured to run at memoryLimit set to 85. Since this is a multi-threaded app, there could be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side and in the worst case takes down the service completely, etc. So, we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
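
    One middle ground worth naming is chunked reading: stream the result set in fixed-size batches, so memory stays bounded (unlike a fully materialized DataSet) while each batch can be processed and released before the next is fetched. Sketched here with Python's DB-API fetchmany purely to show the shape of the loop:

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE work (id INTEGER PRIMARY KEY, payload TEXT)")
    con.executemany("INSERT INTO work (payload) VALUES (?)",
                    [("record %d" % i,) for i in range(10)])

    cur = con.execute("SELECT id, payload FROM work ORDER BY id")
    while True:
        batch = cur.fetchmany(4)        # bounded memory: at most 4 rows held at once
        if not batch:
            break
        for record_id, payload in batch:
            pass                        # per-record processing goes here
        print("processed %d rows" % len(batch))
    con.close()
    # processed 4 rows / processed 4 rows / processed 2 rows
    ```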

    Read the article

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls. protected override void OnLoad(EventArgs e) { base.OnLoad(e); RenderDynamicControls() } private void RenderDynamicControls(){ //1. call service layer to retrieve form definition //2. create and add controls to page container } I have a new requirement in which if a user clicks on a given button (this button is created at design time) the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code: protected void MyButton_Click(object sender, EventArgs e) { RenderDynamicControlsALittleDifferently() } private void RenderDynamicControlsALittleDifferently() (){ //1. clear all controls from the page container added in RenderDynamicControls() //2. call service layer to retrieve form definition //3. create and add controls to page container } My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page-lifecycle works in ASP.NET: Namely, that OnLoad must fire on every Postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that make that transition easier would be much appreciated. Thanks

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...} Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type in each day. Also, I'd like to make it unique by user_id, ie. if a user has more events in the same day, count it only once. I'm trying to do this with map/reduce. I do db.logs.mapReduce(function() { emit(this.type, 1); }, function(k, vals) { var total = 0; for (var i = 0; i < vals.length; i++) total += vals[i]; return total; }}) which nicely groups by type, but now, how can I group by date at the same time? Seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness? I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!

    Read the article

  • Entity Framework - refresh objects from database

    - by Nebo
    I'm having trouble with refreshing objects in my database. I have two PCs and two applications. On the first PC, there's an application which communicates with my database and adds some data to the Measurements table. On my other PC, there's an application which retrieves the latest Measurement on a timer, so it should retrieve measurements added by the application on my first PC too. The problem is it doesn't. On application start, it caches all the data from the database and never gets newly added data. I use the Refresh() method, which works well when I change any of the cached data, but it doesn't refresh newly added data. Here is my method which should update the data: public static Entities myEntities = new Entities(); public static Measurement GetLastMeasurement(int conditionId) { myEntities.Refresh(RefreshMode.StoreWins, myEntities.Measurements); return (from measurement in myEntities.Measurements where measurement.ConditionId == conditionId select measurement).OrderByDescending(cd => cd.Timestamp).First(); } P.S. The applications have different connection strings in app.config (different accounts for the same DB).

    Read the article

  • Order in many to many relation in Django model

    - by Pietro Speroni
    I am writing a small website to store the papers I have written. The relation papers <- authors is important, but the order of the authors' names (which one is first author, which one is second, and so on) is also important. I am just learning Django so I don't know much. In any case, so far I have done: from django.db import models class author(models.Model): Name = models.CharField(max_length=60) URLField = models.URLField(verify_exists=True, null=True, blank=True) def __unicode__(self): return self.Name class topic(models.Model): TopicName = models.CharField(max_length=60) def __unicode__(self): return self.TopicName class publication(models.Model): Title = models.CharField(max_length=100) Authors = models.ManyToManyField(author, null=True, blank=True) Content = models.TextField() Notes = models.TextField(blank=True) Abstract = models.TextField(blank=True) pub_date = models.DateField('date published') TimeInsertion = models.DateTimeField(auto_now=True) URLField = models.URLField(verify_exists=True,null=True, blank=True) Topic = models.ManyToManyField(topic, null=True, blank=True) def __unicode__(self): return self.Title This works fine in the sense that I can now define who the authors are, but I cannot order them. How should I do that? Of course I could add a series of relations: first author, second author, ... but it would be ugly, and would not be flexible. Any better ideas? Thanks
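
    A more flexible pattern than first_author/second_author fields is to make the many-to-many relation explicit with a through model that carries the position. A sketch; the field names are choices rather than requirements, and on newer Django versions on_delete is mandatory:

    ```python
    from django.db import models

    class Author(models.Model):
        name = models.CharField(max_length=60)
        url = models.URLField(blank=True, null=True)

    class Publication(models.Model):
        title = models.CharField(max_length=100)
        authors = models.ManyToManyField(Author, through="Authorship")

    class Authorship(models.Model):
        author = models.ForeignKey(Author, on_delete=models.CASCADE)
        publication = models.ForeignKey(Publication, on_delete=models.CASCADE)
        position = models.PositiveIntegerField()      # 1 = first author, 2 = second, ...

        class Meta:
            ordering = ["position"]

    # Ordered author list for one publication (query sketch):
    # [a.author for a in Authorship.objects.filter(publication=pub).order_by("position")]
    ```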

    Read the article

  • eclipse django using wrong settings.py in pythonpath

    - by user1290264
    I have pydev/django installed in eclipse, and it runs fine. However, after adding a second django project to eclipse and running the server ('http://127.0.0.1:8000') the pythonpath seems to be stuck on project2 even when I run project1. As a summary, I have two django projects: project1, project2. When I run the django server for project1 I get: Validating models... 0 errors found Django version 1.5, using settings 'project1.settings' Development server is running at 'http://127.0.0.1:8000/' Quit the server with CTRL-BREAK. The above seems to suggest that django is using the correct settings file; however, when I go to 'http://127.0.0.1:8000/' it displays the urls from project2. Also, if I go to 'http://127.0.0.1:8000/admin' the models are getting pulled from the sqlite.db file in project2 as well. I've even tried removing project2 from eclipse entirely and now at 'http://127.0.0.1:8000/admin' I get this error: Python Path: ['C:\Users\Brad\workspaces\In Progress\project2', 'C:\Users\Brad\workspaces\In Progress\project2', 'C:\Python27\DLLs', 'C:\Python27\lib', 'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk', 'C:\Python27', 'C:\Python27\lib\site-packages', 'C:\Windows\system32\python27.zip'] If I run the server on a different port with project1 the path seems to be fine: runserver 7000 --noreload Then 'http://127.0.0.1:7000/' uses project1's paths, but it doesn't seem like I should have to do this. Note: I have setup the run configurations as correctly as I know how. In the main tab, the project and main module both point to the correct project (project1), and the "PYTHONPATH that will be used in the run:" includes project1. Also, I have cleared my browser history, cookies, and everything that chrome would let me delete.
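
    One quick sanity check, independent of the Eclipse run configuration, is to ask the running server which settings module and path it actually loaded, for example from a throwaway view or a shell started with the same configuration:

    ```python
    # Print the settings module and the front of sys.path from inside the running
    # process; if these show project2 while launching project1, the run
    # configuration (or a stale DJANGO_SETTINGS_MODULE) is the culprit.
    import os
    import sys

    print(os.environ.get("DJANGO_SETTINGS_MODULE"))
    print(sys.path[:3])
    ```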

    Read the article

  • item-not-found(404) when trying to get a node using Smackx pubsub

    - by DustMason
    I'm trying to use the latest Smackx trunk to get and then subscribe to a pubsub node. However, Openfire just sends me back an error: item not found (404). I am instantiating the Java objects from ColdFusion, so my code snippets might look funny, but maybe someone will be able to tell me what I've forgotten. Here's how I create the node: ftype = createObject("java", "org.jivesoftware.smackx.pubsub.FormType"); cform = createObject("java", "org.jivesoftware.smackx.pubsub.ConfigureForm").init(ftype.submit); cform.setPersistentItems(true); cform.setDeliverPayloads(true); caccess = createObject("java", "org.jivesoftware.smackx.pubsub.AccessModel"); cform.setAccessModel(caccess.open); cpublish = createObject("java", "org.jivesoftware.smackx.pubsub.PublishModel"); cform.setPublishModel(cpublish.open); cform.setMaxItems(99); manager = createObject("java", "org.jivesoftware.smackx.pubsub.PubSubManager").init(XMPPConnection); myNode = manager.createNode("subber", cform); And here's how I am trying to get to it (in a different section of code): manager = createObject("java", "org.jivesoftware.smackx.pubsub.PubSubManager").init(XMPPConnection); myNode = manager.getNode("subber"); Immediately upon creating the node I seem to be able to publish to it like so: payload = createObject("java", "org.jivesoftware.smackx.pubsub.SimplePayload").init("book","pubsub:test:book","<book xmlns='pubsub:test:book'><title>Lord of the Rings</title></book>"); item = createObject("java", "org.jivesoftware.smackx.pubsub.Item").init(payload); myNode.publish(item); However, it is the getNode() call that is causing my code to error. I have verified that the nodes are being created by checking the DB used by my Openfire server. I can see them in there, properly attributed as leaf nodes, etc. Any advice? Anyone else out there doing anything with XMPP and ColdFusion? I have had great success sending and receiving messages with CF and Smack, just haven't had pubsub working yet :) Thanks!

    Read the article
