Search Results

Search found 59643 results on 2386 pages for 'data migration'.

  • hosting simple python scripts in a container to handle concurrency, configuration, caching, etc.

    - by Justin Grant
    My first real-world Python project is to write a simple framework (or reuse/adapt an existing one) which can wrap small Python scripts (which are used to gather custom data for a monitoring tool) with a "container" to handle boilerplate tasks like:

    - fetching a script's configuration from a file (and keeping that info up to date if the file changes, and handling decryption of sensitive config data)
    - running multiple instances of the same script in different threads instead of spinning up a new process for each one
    - exposing an API for caching expensive data and storing persistent state from one script invocation to the next

    Today, script authors must handle the issues above themselves, which usually means that most script authors don't handle them correctly, causing bugs and performance problems. In addition to avoiding bugs, we want a solution which lowers the bar to creating and maintaining scripts, especially given that many script authors may not be trained programmers.

    Below are examples of the API I've been thinking of, and which I'm looking to get your feedback about. A scripter would need to build a single method which takes (as input) the configuration that the script needs to do its job, and either returns a Python object or calls a method to stream back data in chunks. Optionally, a scripter could supply methods to handle startup and/or shutdown tasks.

    HTTP-fetching script example (in pseudocode, omitting the actual data-fetching details to focus on the container's API):

        def run(config, context, cache):
            results = http_library_call(config.url, config.http_method, config.username, config.password, ...)
            return {'html': results.html, 'status_code': results.status, 'headers': results.response_headers}

        def init(config, context, cache):
            config.max_threads = 20                 # up to 20 URLs at one time (per process)
            config.max_processes = 3                # launch up to 3 concurrent processes
            config.keepalive = 1200                 # keep process alive for 20 mins without another call
            config.process_recycle.requests = 1000  # restart the process every 1000 requests (to avoid leaks)
            config.kill_timeout = 600               # kill the process if any call lasts longer than 10 minutes

    A database-fetching script example might look like this (in pseudocode):

        def run(config, context, cache):
            expensive = context.cache["something_expensive"]
            for record in db_library_call(expensive, context.checkpoint, config.connection_string):
                context.log(record, "logDate")  # log all properties; optionally specify name of timestamp property
                last_date = record["logDate"]
            context.checkpoint = last_date      # persistent checkpoint, used next time through

        def init(config, context, cache):
            cache["something_expensive"] = get_expensive_thing()

        def shutdown(config, context, cache):
            expensive = cache["something_expensive"]
            expensive.release_me()

    Is this API appropriately "pythonic", or are there things I should do to make it more natural to the Python scripter? (I'm more familiar with building C++/C#/Java APIs, so I suspect I'm missing useful Python idioms.) Specific questions:

    - Is it natural to pass a "config" object into a method and ask the callee to set various configuration options? Or is there another preferred way to do this?
    - When a callee needs to stream data back to its caller, is a method like context.log() (see above) appropriate, or should I be using yield instead? (yield seems natural, but I worry it'd be over the head of most scripters.)
    - My approach requires scripts to define functions with predefined names (e.g. "run", "init", "shutdown"). Is this a good way to do it? If not, what other mechanism would be more natural?
    - I'm passing the same config, context, cache parameters into every method. Would it be better to use a single "context" parameter instead? Would it be better to use global variables instead?
    - Finally, are there existing libraries you'd recommend to make this kind of simple "script-running container" easier to write?
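
    A minimal sketch of how such a container might drive the predefined entry points, assuming the question's run/init/shutdown naming convention and its config/context/cache objects (the importlib loading and the generator check are illustrative additions, not part of the poster's design):

        import importlib

        def run_script(module_name, config, context, cache):
            """Load a scripter's module and drive its predefined entry points."""
            mod = importlib.import_module(module_name)  # hypothetical script module
            if hasattr(mod, "init"):
                mod.init(config, context, cache)
            try:
                result = mod.run(config, context, cache)
                # If run() used `yield`, result is a generator: stream each chunk
                # back through the container instead of returning one object.
                if hasattr(result, "__next__"):
                    for chunk in result:
                        context.log(chunk)
                    return None
                return result
            finally:
                if hasattr(mod, "shutdown"):
                    mod.shutdown(config, context, cache)

    This also shows why yield is attractive here: the container, not the scripter, decides what "streaming" means.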

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers several days, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, a great many times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with memory-mapped files; I think I don't want to map the 800 MB into memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
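
    For illustration, here is the memory-mapped access pattern sketched in Python (in Java the equivalent is FileChannel.map). Mapping does not load all 800 MB into RAM; the OS pages in only the pages actually touched, which suits uniformly distributed small reads. The file name and record layout below are made up:

        import mmap
        import struct

        with open("precomputed.table", "rb") as f:          # hypothetical file
            mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

            def read_u32(offset):
                # Random 32-bit read; only the containing page is faulted in.
                return struct.unpack_from("<I", mm, offset)[0]

            value = read_u32(1234 * 4)   # e.g. record 1234, 4 bytes per record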

  • Securing a REST API

    - by Christopher McCann
    I am in the middle of developing a REST API - the first one I have ever built. The data being passed through the API is not of such a critical nature that there would be loss of life or economic damage if it were intercepted, but at the same time I would like it to be secure. The data being transferred is like the data that would be transferred on Twitter or Facebook - not overly confidential, but it should still be kept private. What is the best way to secure this data? Am I best off using HTTP Basic Auth over SSL, or should I be looking into something like OAuth? I have never really used REST much before, so this is a bit of a first for me. Thanks
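
    A minimal sketch of what the Basic-over-SSL option looks like from a client's side (the endpoint and credentials here are hypothetical); TLS encrypts the credentials and payload in transit, which covers the "private but not life-critical" requirement:

        import requests

        resp = requests.get(
            "https://api.example.com/v1/items",   # hypothetical endpoint
            auth=("api_user", "api_secret"),      # HTTP Basic auth, safe only over HTTPS
            timeout=10,
        )
        resp.raise_for_status()
        print(resp.json())

    OAuth earns its extra complexity mainly when third-party apps act on behalf of your users; for first-party clients, Basic over SSL is a common and defensible starting point.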

  • Is it possible to bulk load an NDB child Entity in GAE?

    - by hmacread
    At some point in the future I may need to bulk load migration data (i.e. from a CSV). Has anyone had exceptions raised doing the following? Also, is there any change in behaviour if the ndb.put_multi() function is used?

        from google.appengine.ext import ndb

        class Y(ndb.Model):
            pass

        class X(ndb.Model):
            id = ndb.StringProperty()
            name = ndb.StringProperty()

        def read_csv_row(line):
            """returns tuple"""

        while True:
            id, name = read_csv_row(readline())
            if not id:
                break
            x = X(parent=ndb.Key('Y', 'static_id'))
            x.id, x.name = id, name
            x.put()
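
    A hedged sketch of the put_multi variant the question asks about, reusing the question's X model and parent key: batching entities cuts datastore round trips (the batch size of 500 is a conventional choice, not a limit asserted here, and the rows iterable is a hypothetical CSV reader):

        from google.appengine.ext import ndb

        def bulk_load(rows):
            """rows: iterable of (id, name) tuples, e.g. parsed from the CSV."""
            parent = ndb.Key('Y', 'static_id')
            batch = []
            for row_id, name in rows:
                batch.append(X(parent=parent, id=row_id, name=name))
                if len(batch) >= 500:          # assumed batch size
                    ndb.put_multi(batch)
                    batch = []
            if batch:
                ndb.put_multi(batch)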

  • Filter Datagrid onLoad

    - by Morgan Delvanna
    My data grid successfully filters when I select a month from a dropdown list; however, when I try to filter it onLoad it just doesn't filter. The dropdown successfully displays the current month, and the grid should also show the current month's data.

        <script type="text/javascript">
            dojo.require("dojox.grid.DataGrid");
            dojo.require("dojox.data.XmlStore");
            dojo.require("dijit.form.FilteringSelect");
            dojo.require("dojo.data.ItemFileReadStore");
            dojo.require("dojo.date");

            theMonth = new Date();

            dojo.addOnLoad(function() {
                dojo.byId('monthInput').value = month_name[(theMonth.getMonth()+1)];
                var filterString = '{month: "' + theMonth.getMonth() + '"}';
                var filterObject = eval('(' + filterString + ')');
                dijit.byId("eventGrid").filter(filterObject);
            });

            var eventStore = new dojox.data.XmlStore({url: "events.xml", rootItem: "event", keyAttribute: "dateSort"});

            function monthClick() {
                var ctr, test, rtrn;
                test = dojo.byId('monthInput').value;
                for (ctr = 0; ctr <= 11; ctr++) {
                    if (test == month_name[ctr]) {
                        rtrn = ctr;
                    }
                }
                var filterString = '{month: "' + rtrn + '"}';
                var filterObject = eval('(' + filterString + ')');
                eventGrid.setQuery(filterObject);
            }
        </script>
        </head>
        <body class="tundra">
        <div id="header" dojoType="dijit.layout.ContentPane" region="top" class="pfga"></div>
        <div id="menu" dojoType="dijit.layout.ContentPane" region="left" class="pfga"></div>
        <div id="content" style="width:750px; overflow:visible" dojoType="dijit.layout.ContentPane" region="center" class="pfga">
            <div dojotype="dojo.data.ItemFileReadStore" url="months.json" jsID="monthStore"></div>
            <div id="pagehead" class="Heading1">Upcoming Events</div>
            <p>
                <input dojoType="dijit.form.FilteringSelect" store="monthStore" searchAttr="month" name="id" id="monthInput" class="pfga" onChange="monthClick()" />
            </p>
            <table dojoType="dojox.grid.DataGrid" store="eventStore" class="pfga" style="height:500px; width:698px" clientSort="true" jsID="eventGrid">
                <thead>
                    <tr>
                        <th field="date" width="80px">Date</th>
                        <th field="description" width="600px">Description</th>
                    </tr>
                    <tr>
                        <th field="time" colspan="2">Details</th>
                    </tr>
                </thead>
            </table>
        </div>
        <div id="footer"></div>

  • Using reshape + cast to aggregate over multiple columns

    - by DamonJW
    In R, I have a data frame with columns for seat (factor), party (factor) and votes (numeric). I want to create a summary data frame with columns for seat, winning party, and vote share. For example, from the data frame

        df <- data.frame(party=rep(c('Lab','C','LD'), times=4),
                         votes=c(1,12,2,11,3,10,4,9,5,8,6,15),
                         seat=rep(c('A','B','C','D'), each=3))

    I want to get the output

          seat winner voteshare
        1    A      C 0.8000000
        2    B    Lab 0.4583333
        3    C      C 0.5000000
        4    D     LD 0.5172414

    I can figure out how to achieve this, but I'm sure there must be a better way - probably a cunning one-liner using Hadley Wickham's reshape package. Any suggestions? For what it's worth, my solution uses a function from my package djwutils_2.10.zip and is invoked as follows, but there are all sorts of special cases it doesn't deal with, so I'd rather rely on someone else's code:

        aggregateList(df, by=list(seat=seat),
                      FUN=list(winner=function(x) x$party[which.max(x$votes)],
                               voteshare=function(x) max(x$votes)/sum(x$votes)))
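
    For comparison only, the same aggregation sketched in Python with pandas (not the reshape/cast answer the question asks for; column names follow the question's data frame):

        import pandas as pd

        df = pd.DataFrame({
            "party": ["Lab", "C", "LD"] * 4,
            "votes": [1, 12, 2, 11, 3, 10, 4, 9, 5, 8, 6, 15],
            "seat":  [s for s in "ABCD" for _ in range(3)],
        })

        # Row of the max-vote party per seat, plus that party's share of the seat total.
        winners = df.loc[df.groupby("seat")["votes"].idxmax(), ["seat", "party"]]
        shares = df.groupby("seat")["votes"].apply(lambda v: v.max() / v.sum())
        result = (winners.set_index("seat")
                         .rename(columns={"party": "winner"})
                         .assign(voteshare=shares))
        print(result)   # reproduces the desired output table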

  • Auto Submitting a form (cURL)

    - by cast01
    I'm writing a form to post data off to PayPal, and this works fine: I create the form with hidden fields and then have a submit button that submits everything to PayPal. However, when the user clicks that button, there is more that I want to do - for example, change their cart status in the database. So I want to be able to execute some code when they click submit and then post the data to PayPal. I don't want to use JavaScript for this.

    My method at the moment uses cURL: the form posts back to the current page, I then check for $_POST data, do my other commands like updating the status of the cart, and then create a cURL command and post the form data to PayPal. Now, it gets posted successfully, but the browser doesn't go off to PayPal. At first I was just retrieving the result in a string and echoing it out, and I could see that the post was successful. Then I set CURLOPT_FOLLOWLOCATION to 1, assuming this would let the browser go off to PayPal, but it doesn't - what it seems to do is grab some stuff from PayPal and put it on my page.

    Is using cURL the right thing for this? And if so, how do I get round this problem? I want to stay away from JavaScript for this, as only users with JavaScript enabled would have their cart updated. The code I'm using for cURL is below; post_data is an array I created, and I then read key-value pairs into post_string.

        //Set cURL Options
        curl_setopt($curl_connection, CURLOPT_CONNECTTIMEOUT, 30);
        curl_setopt($curl_connection, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
        curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, false);
        curl_setopt($curl_connection, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($curl_connection, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($curl_connection, CURLOPT_MAXREDIRS, 20);

        //Set Data to be Posted
        curl_setopt($curl_connection, CURLOPT_POSTFIELDS, $post_string);

        //Execute Request
        curl_exec($curl_connection);

  • How can I efficiently manipulate 500k records in SQL Server 2005?

    - by cdeszaq
    I am getting a large text file of updated information from a customer that contains updates for 500,000 users. However, as I am processing this file, I often run into SQL Server timeout errors. Here's the process I follow in my VB application that processes the data (in general):

    1. Delete all records from the temporary table, to remove last month's data (e.g. DELETE FROM tempTable)
    2. Rip the text file into the temp table
    3. Fill in extra information in the temp table, such as each user's organization_id, user_id, group_code, etc.
    4. Update the data in the real tables based on the data computed in the temp table

    The problem is that I often run commands like

        UPDATE tempTable
        SET user_id = (SELECT user_id FROM myUsers WHERE external_id = tempTable.external_id)

    and these commands frequently time out. I have tried bumping the timeouts up as far as 10 minutes, but they still fail. Now, I realize that 500k rows is no small number of rows to manipulate, but I would think that a database purported to be able to handle millions and millions of rows should be able to cope with 500k pretty easily. Am I doing something wrong with how I am going about processing this data? Please help. Any and all suggestions welcome.
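
    A hedged sketch of one standard alternative: rewrite the correlated-subquery update as a single set-based UPDATE with a JOIN, shown here issued from Python via pyodbc since the poster's VB code isn't given (the connection string is made up; table and column names are the question's own):

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=dbhost;DATABASE=customers;Trusted_Connection=yes;"
        )
        cur = conn.cursor()
        # Set-based update: one join pass instead of one subquery per row.
        cur.execute("""
            UPDATE t
            SET    t.user_id = u.user_id
            FROM   tempTable t
            JOIN   myUsers   u ON u.external_id = t.external_id
        """)
        conn.commit()

    Whether this beats the correlated form depends on indexing; an index on myUsers.external_id (and on tempTable.external_id) is usually the bigger win.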

  • Passing a web service an unknown number of parameters

    - by Billyhole
    I'm relatively new to utilizing web services. I'm trying to create one that will accept data from an ASP.NET form whose input controls are created dynamically at runtime, so I don't know how many control values will be getting passed. I'm thinking I'll use jQuery's serialize() on the form to get the data, but what do I have the web service accept as a parameter? I thought maybe I could use serializeArray(), but I still don't know what type of variable to accept for the JavaScript array. Finally, I was thinking that I might need to create a simple data transfer object with the data before sending it along to the web service. I just didn't want to go the DTO route if there is a much simpler way or an established best practice I should follow. Thanks in advance for any direction you can provide, and let me know if I wasn't clear enough or if you have any questions.

  • jquery $.ajax not working in firefox against rails (406 response) (works in chrome & IE)

    - by phil swenson
    I have a Rails backend and am testing the following jQuery code against it:

        var content = $("#notification_content").val();
        var data = new Object();
        data.content = content;
        $.ajax({
            url: "/notifications/detect_type.json",
            type: "POST",
            data: data,
            success: function(result) { updateTypeDropDown(result) }
        });

    This code works fine in Chrome and IE. However in Firefox (using Firebug), I see this:

        http://localhost:3000/notifications/detect_type.json  406 Not Acceptable

    Here is a Chrome request in the log:

        Processing NotificationsController#detect_type (for 127.0.0.1 at 2010-12-21 17:05:59) [POST]
          Parameters: {"action"=>"detect_type", "content"=>"226 south emerson denver co 80209", "controller"=>"notifications"}
          User Columns (2.0ms)  SHOW FIELDS FROM users
          User Load (37.4ms)  SELECT * FROM users WHERE (users.id = '1') LIMIT 1
        Completed in 58ms (View: 1, DB: 40) | 406 Not Acceptable [http://localhost/notifications/detect_type.json]

    Here is a Firefox request in the log:

        Processing NotificationsController#detect_type (for 127.0.0.1 at 2010-12-21 17:06:41) [POST]
          Parameters: {"action"=>"detect_type", "content"=>"226 south emerson 80209", "controller"=>"notifications"}
          User Columns (2.1ms)  SHOW FIELDS FROM users
          User Load (30.4ms)  SELECT * FROM users WHERE (users.id = '1') LIMIT 1
        Completed in 100ms (View: 1, DB: 33) | 200 OK [http://localhost/notifications/detect_type.json]

    I'm stumped. Ideas?

  • How do I perform this XPath query with Linq?

    - by John Hansen
    In the following code I am using XPath to find all of the matching nodes and appending the values to a StringBuilder:

        StringBuilder sb = new StringBuilder();
        foreach (XmlNode node in this.Data.SelectNodes("ID/item[@id=200]/DAT[1]/line[position()>1]/data[1]/text()"))
        {
            sb.Append(node.Value);
        }
        return sb.ToString();

    How do I do the same thing, except using LINQ to XML instead? Assume that in the new version, this.Data is an XElement object.

  • Google Map Web Service working on local but not online

    - by Julien
    Hi all, I have a problem with a JavaScript request to the Google Maps API web service: if I have the HTML file on my computer it works, but it doesn't work online. Here's the code:

        url = 'http://maps.google.com/maps/api/geocode/json?address=Senador+Francisco+Quindimil+Y+Carabobo+Por+Carabobo,Ciudad+Autonoma+de+Buenos+Aires,Argentina&sensor=false';
        $.get(url, function(data) {
            $('#result').html(data);
            alert('Load was performed.');
        }, 'text');

    This sample is just supposed to load the "data" into the "result" element. When it is offline, the "data" has text, but not when it is online. Sample here: Online Web Service Test. Could one of you guys help? Thanks a lot!
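
    This looks like it could be the browser's same-origin policy (a page served from one domain generally cannot $.get maps.google.com directly, while a local file is often exempt). One common workaround, sketched here under that assumption: proxy the geocode call through your own server and have the page request that instead. In Python, for illustration:

        import requests

        def geocode(address):
            # Server-side fetch: the browser's same-origin restriction does not apply here.
            resp = requests.get(
                "http://maps.google.com/maps/api/geocode/json",
                params={"address": address, "sensor": "false"},
            )
            return resp.json()

        result = geocode("Senador Francisco Quindimil Y Carabobo Por Carabobo,"
                         "Ciudad Autonoma de Buenos Aires,Argentina")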

  • Rollback doesn't work in MySQLdb

    - by Anton Barycheuski
    I have the following code:

        db = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db,
                             charset='utf8', use_unicode=True)
        db.autocommit(False)
        cursor = db.cursor()
        ...
        for col in ws.columns[1:]:
            data = (col[NUM_ROW_GENERATION].value, 1, type_topliv_dict[col[NUM_ROW_FUEL].value])
            fullgeneration_id = data[0]
            type_topliv = data[2]
            if data in completions_set:
                compl_id = completions_dict[data]
            else:
                ...
                sql = u"INSERT INTO completions (type, mark, model, car_id, type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period_url) VALUES (1, '%s', '%s', 0, %s, %s, %s, %s, '%s', '%s', '%s', '%s')" % (
                    marks_dict[mark_id], models_dict[model_id], type_topliv,
                    fullgeneration_id, mark_id, model_id, production_period,
                    year_from, year_to,
                    production_period.replace(' ', '_').replace(u'?.?.', 'nv'))
                inserted_completion += cursor.execute(sql)
                cursor.execute("SELECT fullgeneration_id, type, type_topliv, id FROM completions where fullgeneration_id = %s AND type_topliv = %s" % (fullgeneration_id, type_topliv))
                row = cursor.fetchone()
                compl_id = row[3]
            if is_first_car:
                deleted_compl_rus = cursor.execute("delete from compl_rus where compl_id = %s" % compl_id)
            for param, row_id in params:
                sql = u"INSERT INTO compl_rus (compl_id, modification, groupparam, param, paramvalue) VALUES (%s, '%s', '%s', '%s', %s)" % (
                    compl_id, col[NUM_ROW_MODIFICATION].value, param[0], param[1], col[row_id].value)
                inserted_compl_rus += cursor.execute(sql)
            is_first_car = False

        db.rollback()

        print '\nSTATISTICS:'
        print 'Inserted completion:', inserted_completion
        print 'Inserted compl_rus:', inserted_compl_rus
        print 'Deleted compl_rus:', deleted_compl_rus
        ans = raw_input('Commit changes? (y/n)')
        db.close()

    I manually deleted records from the table and then ran the script twice. See https://dpaste.de/MwMa . I think the rollback in my code doesn't work. Why?
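
    Separately, a minimal sketch of the transaction mechanics involved: rollback() only undoes work in transactional tables (InnoDB); if completions or compl_rus are MyISAM, every statement is effectively committed as it executes, which is the classic reason a rollback appears to do nothing (the table name follows the question; the connection details are made up):

        import MySQLdb

        db = MySQLdb.connect(host="localhost", user="u", passwd="p", db="test")
        db.autocommit(False)
        cur = db.cursor()
        try:
            cur.execute("INSERT INTO completions (type) VALUES (1)")
            db.rollback()   # undone only if `completions` is an InnoDB table
        finally:
            db.close()

    SHOW TABLE STATUS reports each table's engine, which is worth checking before suspecting the driver.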

  • Line graph disappears after new line is added

    - by tonystinge
    I am having trouble uploading a large .csv file to my Highcharts graph. I've been able to graph an 85 KB file of up to 1486 lines by 9 columns. Here is an example:

        TimeStamp        Temp_1_01  Temp_1_02  Temp_2_01  Temp_2_02  Temp_3_01  Temp_3_02  Temp_4_01  Temp_4_02
        5/15/2014 3:25   408        487        63         84         67         91         63         78
        5/15/2014 3:30   408        487        63         84         67         91         63         78
        5/15/2014 3:35   407        489        63         84         67         91         63         78
        5/15/2014 3:40   408        488        63         84         67         91         63         78
        5/15/2014 3:44   408        488        63         84         67         91         63         78
        ...
        5/22/2014 9:40   483        421        0          93         76         95         72         89

    When I add a new line the line graph disappears. Any suggestions? Here is the JavaScript:

        $.get('Dropbox/geo/sites/GC_Room/loveland.csv', function(data) {
            // Split the lines
            var lines = data.split('\n');
            var i = 0;
            var csvData = [];

            // Iterate over the lines and add categories or series
            $.each(lines, function(lineNo, line) {
                csvData[i] = line.split(',');
                i = i + 1;
            });

            var columns = csvData[0];
            var categories = [], series = [];

            for (var colIndex = 0, len = columns.length; colIndex < len; colIndex++) {
                // first row data as series's name
                var seriesItem = {
                    data: [],
                    name: csvData[0][colIndex]
                };
                for (var rowIndex = 1, rowCnt = csvData.length; rowIndex < rowCnt; rowIndex++) {
                    // first column data as categories
                    if (colIndex == 0) {
                        categories.push(csvData[rowIndex][0]);
                    } else if (parseFloat(csvData[rowIndex][colIndex])) { // <-- here
                        seriesItem.data.push(parseFloat(csvData[rowIndex][colIndex]));
                    }
                }
                // except first column
                if (colIndex > 0) series.push(seriesItem);
            }

            // Create the chart
            var chart = new Highcharts.Chart({
                chart: {
                    alignTick: false,
                    renderTo: 'LANE_METALS',
                    type: 'line'
                },
                title: {
                    text: 'Monthly Average Temperature',
                    x: -20 // center
                },
                subtitle: {
                    text: 'Source: LANE METALS',
                    x: -20
                },
                xAxis: {
                    categories: categories,
                    labels: {
                        step: 200,
                        text: 'Time',
                    },
                    tickWidth: 0
                },
                yAxis: {
                    title: { text: 'Temperature (\xB0C)' },
                    min: 0
                },
                tooltip: {
                    formatter: function() {
                        return '<b>' + this.series.name + '</b><br/>' + this.x + ': ' + this.y + '\xB0C';
                    }
                },
                legend: {
                    layout: 'vertical',
                    //backgroundColor: '#FFFFFF',
                    //floating: true,
                    align: 'left',
                    //x: 100,
                    verticalAlign: 'top',
                    //y: 70,
                    borderWidth: 0
                },
                plotOptions: {
                    area: {
                        turboThreshold: 0,
                        stacking: 'normal',
                        lineColor: '#666666',
                        lineWidth: 1,
                        marker: {
                            lineWidth: 1,
                            lineColor: '#666666'
                        }
                    }
                },
                series: series
            });
        });

  • how to access Camera.java in an onClick event?

    - by Srikanth Naidu
    Hi, I am making an app which takes a photo on a button click. I have Camera.java, which operates the camera and takes the photo. How do I call it in the event below?

        public void onClick(DialogInterface arg0, int arg1) {
            setContentView(R.layout.startcamera);
        }

    Camera.java:

        package neuro.com;

        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;

        import android.app.Activity;
        import android.hardware.Camera;
        import android.hardware.Camera.PictureCallback;
        import android.hardware.Camera.ShutterCallback;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;
        import android.widget.FrameLayout;

        public class CameraDemo extends Activity {
            private static final String TAG = "CameraDemo";
            Camera camera;
            Preview preview;
            Button buttonClick;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.startcamera);

                preview = new Preview(this);
                ((FrameLayout) findViewById(R.id.preview)).addView(preview);

                buttonClick = (Button) findViewById(R.id.buttonClick);
                buttonClick.setOnClickListener(new OnClickListener() {
                    public void onClick(View v) {
                        preview.camera.takePicture(shutterCallback, rawCallback, jpegCallback);
                    }
                });
                Log.d(TAG, "onCreate'd");
            }

            ShutterCallback shutterCallback = new ShutterCallback() {
                public void onShutter() {
                    Log.d(TAG, "onShutter'd");
                }
            };

            /** Handles data for raw picture */
            PictureCallback rawCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    Log.d(TAG, "onPictureTaken - raw");
                }
            };

            /** Handles data for jpeg picture */
            PictureCallback jpegCallback = new PictureCallback() {
                public void onPictureTaken(byte[] data, Camera camera) {
                    FileOutputStream outStream = null;
                    try {
                        // write to local sandbox file system
                        // outStream = CameraDemo.this.openFileOutput(String.format("%d.jpg", System.currentTimeMillis()), 0);
                        // Or write to sdcard
                        outStream = new FileOutputStream(String.format("/sdcard/%d.jpg", System.currentTimeMillis()));
                        outStream.write(data);
                        outStream.close();
                        Log.d(TAG, "onPictureTaken - wrote bytes: " + data.length);
                    } catch (FileNotFoundException e) {
                        e.printStackTrace();
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                    }
                    Log.d(TAG, "onPictureTaken - jpeg");
                }
            };
        }

  • How to set buffer size in client-server app using sockets?

    - by nelly
    First of all, I am new to networking, so I may say dumb things here. Consider a client-server application using sockets (.NET with C#, if that matters):

    - The client sends some data; the server processes it and sends back a string.
    - The client sends some other data; the server processes it, queries the DB, and sends back several hundred items from the database.
    - The client sends some other type of data, and the server notifies some other clients.

    My question is how to set the buffer size correctly for read/write operations. Should I do something like this: byte[] buff = new byte[client.ReceiveBufferSize]? I am thinking of something like the following, where the client sends data to the server (and the server follows the same pattern):

        byte[] bytesToSend = new byte[2048]   // 2048 as the standard size for any command sent by the client
        // bytes 0..1     -> command type
        // bytes 2..2047  -> command parameters

        byte[] bytesToReceive = new byte[8] / byte[64] / byte[8192]   // switch(command type)

    But what happens when a client is notified by the server without sending data? What is the correct way to accomplish what I am trying to do? Thanks for reading.
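
    One widely used answer to the buffer-size question is to remove it: length-prefix every message so the reader always knows exactly how many bytes to expect, and the server can push a notification at any time using the same framing. A sketch in Python (the idea maps directly onto .NET's NetworkStream):

        import socket
        import struct

        def send_msg(sock, payload):
            # 4-byte big-endian length header, then the payload itself.
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_exactly(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed mid-message")
                buf += chunk
            return buf

        def recv_msg(sock):
            (length,) = struct.unpack(">I", recv_exactly(sock, 4))
            return recv_exactly(sock, length)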

  • Why is my Android app force closing when I try to check if an EditText has a double?

    - by user336861
        Scanner scanner = new Scanner(lapsPerMile_st);
        if (!scanner.hasNextDouble()) {
            Context context = getApplicationContext();
            String msg = "Please Enter Digits and Decmials Only";
            int duration = Toast.LENGTH_LONG;
            Toast.makeText(context, msg, duration).show();
            lapsPerMileEditText.setText("");
            return;
        } else {
            // Edit box has only digits; set data and display stats
            data.setLapsPerMile(Integer.parseInt(lapsPerMile_st));
            lapsRunLabel.setVisibility(0);
            lapsRunTextView.setText(Integer.toString(data.getLapsRun()));
            milesRunLabel.setVisibility(0);
            milesRunTextView.setText(Double.toString(data.getLapsRun() / data.getLapsPerMile()));
        }

        <EditText
            android:id="@+id/mileCount"
            android:layout_width="100dp"
            android:layout_height="wrap_content"
            android:layout_marginTop="110dp"
            android:inputType="numberDecimal"
            android:maxLength="4" />

    For some reason, if I enter a non-decimal number such as 3 or 5 it works fine, but when I enter a floating-point value such as 3.4 or 5.8 it force closes. I can't seem to figure out what's going on. Any ideas? Thanks

  • Conditional references in .NET project, possible to get rid of warning?

    - by Lasse V. Karlsen
    I have two references to a SQLite assembly, one for 32-bit and one for 64-bit, which look like this (this is a test project to try to get rid of the warning; don't get hung up on the paths):

        <Reference Condition=" '$(Platform)' == 'x64' " Include="System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=AMD64">
            <SpecificVersion>True</SpecificVersion>
            <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\64-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>
        <Reference Condition=" '$(Platform)' == 'x86' " Include="System.Data.SQLite, Version=1.0.65.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139, processorArchitecture=x86">
            <SpecificVersion>True</SpecificVersion>
            <HintPath>..\..\LVK Libraries\SQLite3\version_1.0.65.0\32-bit\System.Data.SQLite.DLL</HintPath>
        </Reference>

    This produces the following warning:

        Warning 1  The referenced component 'System.Data.SQLite' could not be found.

    Is it possible for me to get rid of this warning? One option I've looked at is to just configure my project to be 32-bit while I develop, and let the build machine fix the reference when building for 64-bit, but this seems a bit awkward and probably prone to errors. Any other options? The reason I want to get rid of it is that the warning is apparently being picked up by TeamCity and periodically flagged as something I need to look into, so I'd like to be completely rid of it.

  • How can chunks be allocated in a node.js stream in object mode all at once?

    - by Quentin Engles
    I can see how buffers and strings can be sent as chunks, but I'm having trouble thinking about how streams are dealt with when working in object mode. Say I have a byte stream from an HTTP request message. I want to take that message, parse it, and then transform it into one big object. I already know how to parse the message. What I'm wondering is: if the message is big, so it arrives in many chunks, but I want to produce one object as output, how can I make sure the data event waits for the whole thing? Is this just a matter of not calling the push method until the chunked data has finished being sent? That would then restrict the stream's data output to a smaller object, which I think I'm fine with for now. As an added condition, the larger data will be reduced in size after the transform.

  • choose append to existing backup instead of overwrite

    - by aron
    Hello, I have a database and I made its first backup 2 days ago. Then yesterday I spent an entire day adding new records. This morning I ran a backup, but I selected "append to existing backup set". I just ran a restore and found that it wiped out all my data from yesterday and restored the version from 2 days ago, not the version from this morning's backup. I zipped this backup file to be safe. I then changed some data in the DB and ran the backup again, but this time I selected "overwrite all existing backup sets". Now when I restore the DB, it seems to restore the data correctly. I think I learned a lesson here - correct me if I'm wrong. My question is: did I lose an entire day of work? I still have this morning's backup .bak file safe in a zip. Is there any way I can restore it with the right data?

  • Facebook - generating fb tag in Jquery - not working

    - by Gublooo
    Hey guys, kind of stuck here. I have this functionality on my site where I display 10 user comments along with their photos, and when a user clicks on "More", more results are fetched from the DB using jQuery and displayed via a string generated in the jQuery method. This is a shortened version of my jQuery function:

        $(".more_swipes").live('click', function() {
            var profile_id = 'profile-user_id; ?';
            $.getJSON("/profile/more-swipes", { user_id: profile_id }, function(swipes) {
                $.each(swipes, function(i, data) {
                    newcomment = "<div>";
                    newcomment += "<fb:profile-pic uid='" + data.fb_userid + "' linked='false'/>";
                    newcomment += "Name=" + data.user_name + "-Comment=" + data.comment;
                    $("#moreswipes").append(newcomment);
                });
            });
            return false;
        });

    As you can see, I'm using the tag

        <fb:profile-pic uid='"+data.fb_userid+"' linked='false'/>

    Now when more results get displayed, the profile pic of the user is not displayed. When I look at the source code, this is how the fb tag looks when it is generated through jQuery:

        <fb:profile-pic uid="222222" linked="false" height="50" width="50">

    whereas the FB tags not generated through the jQuery code look like:

        <fb:profile-pic uid="222222" linked="false" height="50" width="50"
            class="FB_profile_pic fb_profile_pic_rendered FB_ElementReady" style="width:px;height:50px">
            <img src="http://profile.atk......2025.jpg" alt="User Name" title="User Name"
                 style="width:px;height:50px" class="FB_profile_pic"/>
        </fb:profile-pic>

    So when I add it as a string in the jQuery function, Facebook is not identifying and rendering it. Any idea how to fix this? Thanks a bunch

  • Matlab GUI: How to Save the Results of Functions (states of application)

    - by niko
    Hi, I would like to create an animation which enables the user to step backward and forward through the steps of a simulation. The animation has to simulate the iterative process of channel decoding: a receiver receives a block of bits, performs an operation, and then checks whether the block corresponds to the parity rules. If the block doesn't correspond, the operation is performed again, and the process finally ends when the code corresponds to the given rules.

    I have written the functions which perform the decoding process and return an m x n x i matrix, where m x n is the block of data and i is the iteration index. So if it takes 3 iterations to decode the data, the function returns an m x n x 3 matrix with each step stored.

    In the GUI (.fig file) I put a "decode" button which runs the decoding method, and there are "back" and "forward" buttons which should enable the user to switch between the data of the recorded steps. I have stored the decodedData matrix and the currentStep value as global variables, so by clicking the "back" and "forward" buttons the indices should change and point to the appropriate step states. When I tried to debug the application, the method returned the decoded data, but when I tried to click "back" and "forward", the decoded data appeared not to be declared. Does anyone know how it is possible to access (or store) the results of the functions in order to enable the described logic, which I want to implement in a Matlab GUI?

  • Which network protocol to use for lightweight notification of remote apps (Delphi 2005)

    - by Chris Thornton
    I have this situation: client-initiated SOAP 1.1 communication between one server and, let's say, tens of thousands of clients. Clients are external, coming in through our firewall, authenticated by certificate, HTTPS, etc. They can be anywhere and usually have their own firewalls, NAT routers, etc. They're truly external, not just remote corporate offices - they could be on a corporate/campus network, DSL/cable, even dialup.

    Currently, clients push new data to the server and pull new data from the server on a 15-minute polling loop. The server currently does not push data - the client hits the "messagecount" method to see if there is new data to pull. If it is 0, the client sleeps for another 15 minutes and checks again. We're trying to get that down to 7 seconds.

    If this were an internal app with one or just a few dozen clients, we'd write a client "listener" SOAP service and push data to it. But since the clients are external, sit behind their own firewalls, and are sometimes on private networks behind NAT routers, this is not practical. So we're left with polling on a much quicker loop. 10k clients, each checking their message count every 10 seconds, comes to 1000 requests/sec that will mostly just waste bandwidth, server, firewall, and authenticator resources. So I'm trying to design something better than what would amount to a self-inflicted DoS attack.

    I don't think it's practical to have the server send SOAP messages to the client (push), as this would require too much configuration at the client end. But I think there are alternatives that I don't know about, such as:

    1. Is there a way for the client to make a request for GetMessageCount() via SOAP 1.1, get the response, and then perhaps "stay on the line" for 5-10 minutes to get additional responses in case new data arrives? I.e., the server says "0", then a minute later, in response to some SQL trigger (the server is C# on SQL Server, btw), knows that this client is still "on the line" and sends the updated message count of "5"? (See the sketch below.)
    2. Is there some other protocol we could use to "ping" the client, using information gathered from its last GetMessageCount() request?
    3. I don't even know. I guess I'm looking for some magic protocol where the client can send a GetMessageCount() request which includes info for "oh, by the way, in case the answer changes in the next hour, ping me at this address...".

    Also, I'm assuming that any of these "keep the line open" schemes would seriously impact the server sizing, as the server would need to keep many thousands of connections open simultaneously. That would likely impact the firewalls too, I think. Is there anything out there like that, or am I pretty much stuck with polling? TIA, Chris
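
    Option 1 is essentially what is now called long polling. A hedged sketch of the client side in Python (the URL and handler are hypothetical; the server would hold the request open until the count changes or a timeout passes):

        import requests

        def handle_messages(count):
            print("new messages:", count)   # hypothetical handler

        def poll_forever():
            while True:
                try:
                    # The server answers immediately if count > 0; otherwise it
                    # holds the connection open for up to ~10 minutes first.
                    resp = requests.get("https://example.com/GetMessageCount",
                                        timeout=620)
                    count = int(resp.text)
                    if count > 0:
                        handle_messages(count)
                except requests.Timeout:
                    pass    # connection idled out; just reconnect and wait again

    The cost the poster anticipates is real: every waiting client holds an open connection, so the server side typically needs an asynchronous, event-driven front end rather than a thread per client.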
