Search Results

Search found 81445 results on 3258 pages for 'file command'.


  • Java binary files writeUTF... explain specifications...

    - by user69514
    I'm studying Java on my own. One of the exercises is the following; however, I don't really understand what it is asking for. Are there any smart Java gurus out there who could explain this in more detail and in simpler words? Thanks. Suppose that you have a binary file that contains numbers whose type is either int or double. You don't know the order of the numbers in the file, but their order is recorded in a string at the beginning of the file. The string is composed of the letters i (for int) and d (for double), in the order of the types of the subsequent numbers. The string is written using the method writeUTF. For example, the string "iddiiddd" indicates that the file contains eight values, as follows: one integer, followed by two doubles, followed by two integers, followed by three doubles. Read this binary file and create a new text file with the values written one to a line.
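
    For what it's worth, here is a minimal sketch of one way to attack the exercise: read the order string with readUTF, then dispatch on each character. The file names are placeholders.

        import java.io.*;

        public class NumberFileConverter {
            public static void main(String[] args) throws IOException {
                try (DataInputStream in = new DataInputStream(
                         new BufferedInputStream(new FileInputStream("numbers.dat")));
                     PrintWriter out = new PrintWriter(new FileWriter("numbers.txt"))) {
                    String order = in.readUTF();        // e.g. "iddiiddd", written by writeUTF
                    for (char c : order.toCharArray()) {
                        if (c == 'i') out.println(in.readInt());     // 'i' -> 4-byte int
                        else          out.println(in.readDouble());  // 'd' -> 8-byte double
                    }
                }
            }
        }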


  • Batch build using IAR tools

    - by Jim Tshr
    I am trying to do a batch build of a project using the IAR tools. The processor is a CC2530, and the project builds fine in the IDE. I have followed the documentation for batch builds (Project > Batch Build), which created a .cspy file that is supposed to be my batch file, but the comments in that file indicate that I need a debug file (.ubrof) to execute with it. I can't find a .ubrof file, and I have searched the whole project directory structure. Also, I want my batch build to produce a production version without the debugging information. Where do I get a .ubrof file? How do I do a production batch build using the IAR tools?
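
    For what it's worth, IAR Embedded Workbench also ships a command-line builder, IarBuild.exe (under common\bin in the installation), which sidesteps the generated .cspy script entirely; the .cspy file drives the C-SPY debugger rather than the build, which would explain the .ubrof reference. A sketch, assuming the project has a Release configuration without debug output:

        IarBuild.exe MyProject.ewp -build Release -log info

    The project and configuration names here are placeholders; check the Embedded Workbench user guide for the exact options in your version.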


  • Using XStream to deserialize an XML response with separate "success" and "failure" forms?

    - by Chris Markle
    I am planning on using XStream with Java to convert between objects and XML requests, and between XML responses and objects, where the XML flows over HTTP/HTTPS. On the response side I can get a "successful" response, which seems like it would map to one Java class, or a "failure" response, which seems like it would map to another. For example, for a "file list" request I could get an affirmative response:

        <?xml version="1.0" encoding="UTF-8"?>
        <response>
          <success>true</success>
          <files>
            <file>[...]</file>
            <file>[...]</file>
            <file>[...]</file>
          </files>
        </response>

    or I could get a negative response:

        <?xml version="1.0" encoding="UTF-8"?>
        <response>
          <success>false</success>
          <error>
            <errorCode>-502</errorCode>
            <systemMessage>[...]AuthenticationException</systemMessage>
            <userMessage>Not authenticated</userMessage>
          </error>
        </response>

    To handle this, should I include fields in one class for both cases, or should I somehow use XStream to "conditionally" create one of the two potential classes? The case with fields from both responses in the same object would look something like this:

        class Response {
            boolean success;
            ArrayList<File> files;
            ResponseError error;
            [...]
        }

        class File {
            String name;
            long size;
            [...]
        }

        class ResponseError {
            int errorCode;
            String systemMessage;
            String userMessage;
            [...]
        }

    I don't know what the "use XStream and create different objects in case of success or error" approach looks like. Is it possible to do that somehow? Is it a better or worse way to go? Any advice on how to use XStream to handle this success vs. failure response case would be appreciated. Thanks in advance!
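
    For the record, a hedged sketch of the single-class approach using XStream's default reflection converter; the File class is renamed FileEntry here to avoid clashing with java.io.File, and all names are illustrative:

        import com.thoughtworks.xstream.XStream;
        import java.util.List;

        // One class covers both response shapes; whichever branch the server
        // did not send simply stays null.
        class Response {
            boolean success;
            List<FileEntry> files;   // populated on success
            ResponseError error;     // populated on failure
        }

        class FileEntry { String name; long size; }

        class ResponseError { int errorCode; String systemMessage; String userMessage; }

        public class ResponseParser {
            public static Response parse(String xml) {
                XStream xstream = new XStream();
                xstream.alias("response", Response.class);
                xstream.alias("file", FileEntry.class);
                return (Response) xstream.fromXML(xml);
            }
        }

    A caller then branches on response.success and reads either files or error; mapping to two distinct classes would need a custom converter or a peek at the XML first, which arguably makes the combined class the simpler route.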


  • Copy Structure To Another Program

    - by Steven
    Long story, long: I am adding a web interface (ASP.NET: VB) to a data acquisition system developed with LabVIEW which outputs raw data files. These raw data files are the binary representation of a LabVIEW cluster (essentially a structure). LabVIEW provides functions to instantiate a class or structure, or to call a method, defined in a .NET DLL file. I plan to create a DLL file containing a structure definition and a class with methods to transfer the structure. When the web page requests data, it would call a LabVIEW executable with a filename parameter. The LabVIEW code would instantiate the structure, populate it from the data file, then call the method to transfer the data back to the website. Long story, short: How do you recommend I transfer (copy) an instance of a structure from one .NET program to a VB.NET program? Ideas considered: sockets, temp file, XML file, config file, web services, CSV, some type of serialization, shared memory.


  • Install TurboGears on Windows XP

    - by coder
    I've been trying to get TurboGears installed on Windows by following this site. I've installed virtualenv, but when I execute the command "virtualenv --no-site-packages testproj", I get the following message:

        New python executable in testproj\Scripts\python.exe
        Traceback (most recent call last):
          File "C:\Python26\Scripts\virtualenv-script.py", line 8, in <module>
            load_entry_point('virtualenv==1.4.5', 'console_scripts', 'virtualenv')()
          File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 529, in main
            use_distribute=options.use_distribute)
          File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 612, in create_environment
            site_packages=site_packages, clear=clear))
          File "C:\Python26\lib\site-packages\virtualenv-1.4.5-py2.6.egg\virtualenv.py", line 837, in install_python
            stdout=subprocess.PIPE)
          File "C:\Python26\lib\subprocess.py", line 621, in __init__
            errread, errwrite)
          File "C:\Python26\lib\subprocess.py", line 830, in _execute_child
            startupinfo)
        WindowsError: [Error 14001] This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem

    Can someone help me debug this? If anyone knows a better tutorial for installing TurboGears, please let me know.


  • .NET List<string>.Remove bug: should I submit an MS Connect bug report on this?

    - by Dean Lunz
    So I was beating my head against a wall for a while before it dawned on me. I have some code that saves a list of names into a text file:

        System.IO.File.WriteAllLines(dlg.FileName, this.characterNameMasterList.Distinct().ToArray());

    The character names can contain special characters. These names come from the WoW armory at www.wowarmory.com. There are about 26,000 names saved in the .txt file, and they get saved just fine. I wrote another application that reads these names from that .txt file using this code:

        // download the names from the db
        var webNames = this.DownloadNames("character");
        // filter names and get ones that need to be added to the db
        var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName));
        foreach (var name in webNames)
        {
            if (localNames.Contains(name.Trim()))
                localNames.Remove(name);
        }
        return localNames;

    The code downloads a list of names from my website that are already in the db, then reads the local .txt file and singles out every name that is not yet in the db so it can be added later. The names read from the .txt file also come through with no problems.

    The problem comes in when removing names from the localNames list, which is a List<string>. As soon as localNames.Remove(name) gets called, any names in the list that had special characters in them get corrupted, the special characters converted into ? characters. See this screen cap: http://yfrog.com/12badcharsp

    So I tried doing it another way:

        // download the names from web that are already in the db
        var webNames = this.DownloadNames("character");
        // filter names and get ones that need to be added to the db
        var localNames = new List<string>(System.IO.File.ReadAllLines(dlg.FileName));
        int index = 0;
        while (index < webNames.Count)
        {
            var name = webNames[index++];
            var pos = localNames.IndexOf(name.Trim());
            if (pos != -1)
                localNames.RemoveAt(pos);
        }
        return localNames;

    But using localNames.RemoveAt also corrupts the items in the list, converting special characters into ?. So is this a known bug with the List.Remove methods? Does anyone know? Has anyone else had this problem? I also used .NET Reflector to disassemble/inspect the List.Remove and List.RemoveAt code, and it appears to be calling some external Copy function.

    Aside from the fact that this is probably not the best way to get a unique list of items from two lists, am I missing something, or is there something to be aware of when using the List.Remove methods? I am running Windows 7 and VS2010, and my app targets .NET 4 (not the Client Profile).


  • Bulk inserting: best way to go about it? Plus help understanding fully what I found so far

    - by chobo2
    Hi. So I saw this post and read it, and it seems like bulk copy might be the way to go: http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c

    I still have some questions and want to know how things actually work. I found two tutorials:

    http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
    http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx

    The first uses two ADO.NET 2.0 features: BulkInsert and BulkCopy. The second uses LINQ to SQL and OpenXML. The second sort of appeals to me, as I am using LINQ to SQL already and prefer it over ADO.NET. However, as one person pointed out in the posts, it just goes around the issue at the cost of performance (nothing wrong with that, in my opinion). First I will talk about the two ways in the first tutorial. I am using VS2010 Express, .NET 4.0, MVC 2.0, and SQL Server 2005. Is ADO.NET 2.0 the most current version? Based on the technology I am using, are there updates to what I am going to show that would improve it somehow? Is there anything these tutorials left out that I should know about?

    BulkInsert

    I am using this table for all the examples:

        CREATE TABLE [dbo].[TBL_TEST_TEST]
        (
            ID INT IDENTITY(1,1) PRIMARY KEY,
            [NAME] [varchar](50)
        )

    SP code:

        USE [Test]
        GO
        /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50))
        AS
        BEGIN
            INSERT INTO TBL_TEST_TEST VALUES (@Name);
        END

    C# code:

        /// <summary>
        /// Another ADO.NET 2.0 way that uses a stored procedure to do a bulk insert.
        /// Seems slower than the "BatchBulkCopy" way, and it crashes when you try
        /// to insert 500,000 records in one go.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchInsert()
        {
            // Get the DataTable with rows in state RowState.Added
            DataTable dtInsertRows = GetDataTable();

            SqlConnection connection = new SqlConnection(connectionString);
            SqlCommand command = new SqlCommand("sp_BatchInsert", connection);
            command.CommandType = CommandType.StoredProcedure;
            command.UpdatedRowSource = UpdateRowSource.None;

            // Set the parameter with the appropriate source column name
            command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName);

            SqlDataAdapter adpt = new SqlDataAdapter();
            adpt.InsertCommand = command;
            // Specify the number of records to be inserted/updated in one go. Default is 1.
            adpt.UpdateBatchSize = 1000;

            connection.Open();
            int recordsInserted = adpt.Update(dtInsertRows);
            connection.Close();
        }

    So the first thing is the batch size. Why would you set the batch size to anything but the number of records you are sending? I am sending 500,000 records, so I set a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1,000, it works just fine.

        System.Data.SqlClient.SqlException was unhandled
        Message="A transport-level error has occurred when sending the request to the server.
        (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)"
        Source=".Net SqlClient Data Provider"
        ErrorCode=-2146232060
        Class=20
        LineNumber=0
        Number=233
        Server=""
        State=0
        StackTrace:
            at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
            at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount)
            at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping)
            at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping)
            at System.Data.Common.DbDataAdapter.Update(DataTable dataTable)
            at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124
            at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16
        InnerException:

    Inserting 500,000 records with an insert batch size of 1,000 took 2 minutes and 54 seconds. Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways, but I was too lazy to look up what they were). I find that kind of slow compared to all my other attempts (except the plain LINQ to SQL insert), and I am not really sure why.

    Next I looked at BulkCopy:

        /// <summary>
        /// An ADO.NET 2.0 way to mass insert records. This seems to be the fastest.
        /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241
        /// </summary>
        private static void BatchBulkCopy()
        {
            // Get the DataTable
            DataTable dtInsertRows = GetDataTable();

            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";

                // Number of records to be processed in one go
                sbc.BatchSize = 500000;

                // Map the source column from the DataTable to the destination column
                // in the SQL Server 2005 table
                // sbc.ColumnMappings.Add("ID", "ID");
                sbc.ColumnMappings.Add("NAME", "NAME");

                // Number of records after which the client is notified about its status
                sbc.NotifyAfter = dtInsertRows.Rows.Count;

                // Event that gets fired when NotifyAfter records have been processed
                sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied);

                // Finally, write to the server
                sbc.WriteToServer(dtInsertRows);
                sbc.Close();
            }
        }

    This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BulkCopy had no problem with a 500,000 batch size, so again, why make it smaller than the number of records you want to send? With BulkCopy and a 500,000 batch size it took only 5 seconds to complete; with a batch size of 1,000 it took 8 seconds. So much faster than the BulkInsert one above.

    Now I tried the other tutorial.

        USE [Test]
        GO
        /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST] (@UpdatedProdData nText)
        AS
        DECLARE @hDoc int

        EXEC sp_xml_preparedocument @hDoc OUTPUT, @UpdatedProdData

        INSERT INTO TBL_TEST_TEST (NAME)
        SELECT XMLProdTable.NAME
        FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2)
        WITH (
            ID Int,
            NAME varchar(100)
        ) XMLProdTable

        EXEC sp_xml_removedocument @hDoc

    C# code:

        /// <summary>
        /// This uses LINQ to SQL to make the table objects. They are then serialized
        /// to an XML document and sent to a stored procedure that does a bulk insert
        /// (I think with OPENXML).
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertXMLBatch()
        {
            using (TestDataContext db = new TestDataContext())
            {
                TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000];
                for (int count = 0; count < 500000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    testRecords[count] = testRecord;
                }

                StringBuilder sBuilder = new StringBuilder();
                System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder);
                XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[]));
                serializer.Serialize(sWriter, testRecords);
                db.insertTestData(sBuilder.ToString());
            }
        }

    I like this because I get to use objects, even though it is kind of redundant. But I don't get how the SP works; I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I don't even know how to take this example SP and change it to fit my tables since, like I said, I don't know what is going on. I also don't know what happens if the object has more tables in it. Say I have a ProductName table that has a relationship to a Product table, or something like that. In LINQ to SQL you can get the ProductName object and make changes to the Product table through that same object, so I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: 52 seconds for 500,000 records.

    The last way, of course, was just using LINQ to do it all, and it was pretty bad.

        /// <summary>
        /// This uses LINQ to SQL to insert lots of records. It is slow, as it does
        /// no mass insert. I only tried to insert 50,000 records, as I did not want
        /// to sit around until it did 500,000.
        /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx
        /// </summary>
        private static void LinqInsertAll()
        {
            using (TestDataContext db = new TestDataContext())
            {
                db.CommandTimeout = 600;
                for (int count = 0; count < 50000; count++)
                {
                    TBL_TEST_TEST testRecord = new TBL_TEST_TEST();
                    testRecord.NAME = "Name : " + count;
                    db.TBL_TEST_TESTs.InsertOnSubmit(testRecord);
                }
                db.SubmitChanges();
            }
        }

    I did only 50,000 records, and that took over a minute.

    So I have really narrowed it down to the LINQ to SQL bulk insert way or BulkCopy. I am just not sure how to do either when you have relationships. I am also not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying that yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes LINQ to SQL nicer, as you've got objects, especially if you're first parsing data from an XML file before you insert it into the database.

    Full C# code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Serialization;
        using System.Data;
        using System.Data.SqlClient;

        namespace TestIQueryable
        {
            class Program
            {
                private static string connectionString = "";

                static void Main(string[] args)
                {
                    BatchInsert();
                    Console.WriteLine("done");
                }

                // LinqInsertAll, LinqInsertXMLBatch, and BatchBulkCopy are exactly as
                // shown above. BatchInsert is also as shown above, except that here
                // adpt.UpdateBatchSize = 500000 (the setting that crashes).

                private static DataTable GetDataTable()
                {
                    // You first need a DataTable that has all the insert values in it
                    DataTable dtInsertRows = new DataTable();
                    dtInsertRows.Columns.Add("NAME");

                    for (int i = 0; i < 500000; i++)
                    {
                        DataRow drInsertRow = dtInsertRows.NewRow();
                        string name = "Name : " + i;
                        drInsertRow["NAME"] = name;
                        dtInsertRows.Rows.Add(drInsertRow);
                    }
                    return dtInsertRows;
                }

                static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
                {
                    Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString());
                }
            }
        }


  • Writing simple flash plugin for flowplayer

    - by danwoods
    Hello all, I'm trying to write a simple plugin for the Flash player Flowplayer (documentation for writing a Flowplayer plugin can be found here). I'm new to Flash, and I think I'm having a problem connecting the .fla file to the .as file when compiling into a .swf. As it is, when I include the plugin, the player doesn't show up. I've set the .fla's document class to the .as file and added the .as file to the .fla's publishing classpath. The .as file can be found here and the .fla file can be found here. Any ideas?


  • Difference between urllib2 call in IDLE and from Django?

    - by danspants
    The following piece of code works as expected when running in a local install of Django under Apache 2.2:

        fx = urllib2.Request(f);
        fx.add_header('User-Agent','Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.19 (KHTML, like Gecko) Chrome/1.0.154.36 Safari/525.19');
        url_opened = urllib2.urlopen(fx);

    However, when I enter that code into IDLE on the same machine, I get the following error:

        url_opened = urllib2.urlopen(fx);
          File "C:\Python25\lib\urllib2.py", line 124, in urlopen
            return _opener.open(url, data)
          File "C:\Python25\lib\urllib2.py", line 387, in open
            response = meth(req, response)
          File "C:\Python25\lib\urllib2.py", line 498, in http_response
            'http', request, response, code, msg, hdrs)
          File "C:\Python25\lib\urllib2.py", line 425, in error
            return self._call_chain(*args)
          File "C:\Python25\lib\urllib2.py", line 360, in _call_chain
            result = func(*args)
          File "C:\Python25\lib\urllib2.py", line 506, in http_error_default
            raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
        HTTPError: HTTP Error 407: Proxy Authentication Required

    Any ideas?


  • How to use infinite live streams with the JAVE library? (Java, ffmpeg)

    - by Ole Jak
    So I want to use JAVE to save an mp3 radio stream to my file system. I have this code for saving to a file, but what should I do to save a stream (stopping on a timer, for example)?

        File source = new File("source.wav");
        File target = new File("target.mp3");
        AudioAttributes audio = new AudioAttributes();
        audio.setCodec("libmp3lame");
        audio.setBitRate(new Integer(128000));
        audio.setChannels(new Integer(2));
        audio.setSamplingRate(new Integer(44100));
        EncodingAttributes attrs = new EncodingAttributes();
        attrs.setFormat("mp3");
        attrs.setAudioAttributes(audio);
        Encoder encoder = new Encoder();
        encoder.encode(source, target, attrs);
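
    Since an internet radio stream is typically already mp3, one approach is to skip transcoding entirely and copy the raw bytes to disk until a deadline passes. Below is a minimal sketch in plain Java (no JAVE involved; the stream URL is hypothetical):

        import java.io.*;
        import java.net.URL;

        public class StreamRipper {
            // Copies an mp3 stream to a file for a fixed duration, then stops.
            public static void rip(String streamUrl, File target, long millis) throws IOException {
                long deadline = System.currentTimeMillis() + millis;
                try (InputStream in = new URL(streamUrl).openStream();
                     OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
                    byte[] buf = new byte[8192];
                    int n;
                    while (System.currentTimeMillis() < deadline && (n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                }
            }
        }

    Usage would be something like StreamRipper.rip("http://example.com/radio.mp3", new File("target.mp3"), 60000); if the source were not already mp3, the saved capture could then be handed to JAVE's Encoder as a normal finite file.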


  • Testing a Django view causes an "AttributeError: 'NoneType' object has no attribute 'handler500'" error

    - by jack
    I just wanted to start testing a Django view using the code below:

        from django.test.client import Client
        c = Client()
        response = c.get('/search/keyword')
        print response.content

    It just throws out the following error message:

        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 286, in get
          response = self.request(**r)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 230, in request
          response = self.handler(environ)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 74, in __call__
          response = self.get_response(request)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 143, in get_response
          return self.handle_uncaught_exception(request, resolver, exc_info)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 178, in handle_uncaught_exception
          callback, param_dict = resolver.resolve500()
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 268, in resolve500
          return self._resolve_special('500')
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 258, in _resolve_special
          callback = getattr(self.urlconf_module, 'handler%s' % view_type)
        AttributeError: 'NoneType' object has no attribute 'handler500'

    The view works in the browser. What's wrong with the above code?


  • Resolve Maya crash

    - by knishua
    Hi, I have a file which crashes while rendering. The file contains 50+ reference nodes. Polycount: 2,59,49,150 (lks); textures: 2,628 (ths). It is being rendered in mental ray, and all textures are .map. The attached image has the machine configuration with page file. Here is the log entry from when the file crashes:

        Exception code: C0000006: IN_PAGE_ERROR
        Fault address: 7814E420 in C:\WINDOWS\WinSxS\amd64_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_9D1C6CE0\MSVCR80.dll

    What does this line in the log file mean, and how do I resolve it? Brgds, kNish


  • Django internationalization and translation problems

    - by Zayatzz
    I have a problem with Django translations.

    Problem 1: I updated a string in the django.po file, but the change does not appear on the web page.

    Problem 2: I created my own locale file with django-admin.py makemessages -l et and added the translation strings to it, but they too do not appear on the page.

    I do not think this is a settings problem, because the translations from the django.po file do appear on the website; it's just the changes, and the translations from my own generated file, that do not appear.

    Edit: My settings.py contains this:

        gettext = lambda s: s
        LANGUAGE_CODE = 'et'
        LANGUAGES = (
            ('et', gettext('Estonian')),
        )

    My own locale files are in /path/to/project/locale/et/LC_MESSAGES/, and the files are django.mo and django.po. The file I refer to in problem 1 is Django's own et translation, which I changed.
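
    One detail worth checking, as a sketch of the usual cycle: edited .po files take effect only after they are compiled to .mo and the server process is reloaded, so an edit to django.po alone changes nothing. The paths below assume the default locale layout.

        django-admin.py makemessages -l et
        # ... edit locale/et/LC_MESSAGES/django.po ...
        django-admin.py compilemessages
        # then restart the dev server (or reload Apache) so the new .mo files are read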


  • Cannot edit any PHP files using specific functions

    - by user458474
    I cannot update any txt files using PHP. When I write simple code like the following:

        <?php
        // create file pointer
        $fp = fopen("C:/Users/jj/bob.txt", 'w')
            or die('Could not open file, or file does not exist and failed to create.');
        $mytext = '<b>hi. This is my test</b>';
        // write text to file
        fwrite($fp, $mytext) or die('Could not write to file.');
        $content = file("C:/Users/jj/bob.txt");
        // close file
        fclose($fp);
        ?>

    Both files do exist in the folder. I just cannot see any updates to bob.txt. Is this a permission error in Windows? It works fine on my laptop at home. I also cannot change the PHP files on my website using FileZilla.


  • Adobe Air upload progress without FileReference

    - by anhtuannd
    I'm deploying a small application with Adobe AIR. My application will do batch uploads from file paths stored in a text file. For example, a text file named "list.txt" contains the string "C:\myfiles\IMG_0001.JPG". Now I want to upload this image file and keep track of the upload progress. I want to use FileReference to get the upload progress, but I don't know how to import the file from its path. I also wonder how to use FileReference to upload the file without prompting a dialog for the user to select it. Thank you so much :)


  • Configuration manager for PHP

    - by Jack
    I am working on refactoring the configuration-file loading part of a PHP project. Earlier I was using multiple .ini files, but now I plan to go for a single XML file containing all of the project's configuration details. The problem is that if somebody wants the configuration in .ini, a DB, or anything else rather than the default (in this case XML), my code should handle that part. Whoever opts for another store, such as .ini, will have to create an .ini file mirroring my XML configuration file, and my configuration manager should take care of everything else, such as parsing and caching. For that I need a proper interface to my configuration data where the underlying data store can be anything (XML, DB, .ini, etc.). I also don't want the interface to depend on the underlying store, and it should be extensible to other file formats in the future.


  • Automatically deleting pyc files when corresponding py is moved (Mercurial)

    - by Oddthinking
    (I foresaw this problem might happen 3 months ago and was told to be diligent to avoid it. Yesterday I was bitten by it, hard, and now that it has cost me real money I am keen to fix it.) If I move one of my Python source files into another directory, I need to remember to tell Mercurial that it moved (hg move). When I deploy the new software to my server with Mercurial, it carefully deletes the old Python file and creates it in the new directory. However, Mercurial is unaware of the .pyc file in the same directory and leaves it behind. The old .pyc is used preferentially over the new Python file by other modules in the same directory. What ensues is NOT hilarity. How can I persuade Mercurial to automatically delete my old .pyc file when I move the Python file? Is there another, better practice? Trying to remember to delete the .pyc file from all the Mercurial repositories isn't working.
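
    One common approach, sketched below: a working-copy hook in the server repository's .hg/hgrc that sweeps orphaned byte-code after every update. The hook name after the dot is arbitrary, and find -delete assumes GNU find; on Windows an equivalent script would be substituted.

        [hooks]
        # after each working-copy update, remove stale byte-code files
        update.purge-pyc = find . -name '*.pyc' -delete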


  • What's a reliable and practical way to protect software with a user license?

    - by Frank
    I know software companies use licenses to protect their software, but I also know there are keygen programs to bypass them. I'm a Java developer; if I put my program online for sale, what's a reliable and practical way to protect it? How about something like this, would it work?

    1. I use ProGuard to protect the source code.
    2. Sign the executable jar file.
    3. Since my Java program only needs to work on PCs [I need to use JDIC in it], I wrap the final executable jar into an .exe file, which makes it harder to decompile.
    4. When a user first downloads and runs my app, it checks for a Pass file on his PC.
    5. If the Pass file doesn't exist, it runs in demo mode and exits in 5 minutes.
    6. When the demo exits, a panel opens with a "Buy Now" button. This demo mode repeats forever unless step 7 happens.
    7. If the user clicks the "Buy Now" button, he fills out a detailed form [name, phone, email, ...] and presses a "Verify Info" button, which saves the form to a Pass file, leaving the license-key field empty in this newly generated Pass file.
    8. Pressing "Verify Info" takes him to an HTML form pre-filled with his info to verify what he is buying; also hidden in the form's input fields is a license key number. He can now press a "Pay Now" button to go to PayPal and finish the process.
    9. The hidden license key number is passed to PayPal as product-ID info and emailed to me.
    10. After I get the payment and the PayPal email, I add the license key number to a valid-license-key list and put it on my site; only I know the URL. The list is updated hourly.
    11. A few hours later, when the user runs the app again, it finds the Pass file on his PC, but the license-key value is empty, so it checks the valid-list URL to see if its license key number is on the list. If so, it writes the license key number into the Pass file, and the next time it starts, it finds the valid license key and runs in purchased mode without exiting after 5 minutes.
    12. If it can't find its license key number on the list at my URL, it runs in demo mode.
    13. To prevent a user from copying and using another paid user's valid Pass file, the license key number is unique to each PC [I'm trying to find out how], so a valid Pass file only works on one PC. Only after a user has paid will PayPal email me the valid license key number with his payment.
    14. The ID checking goes like this: take the CPU ID, for example "CPU_01-02-ABC", encrypt it to the result ID, say "XeR5TY67rgf", and compare it to the list at my URL. If "XeR5TY67rgf" is not on my valid-user list, run in demo mode; if it exists, write "XeR5TY67rgf" into the Pass file's license field.

    In order to get a unique license key, can I use his PC's CPU ID, or something else unique and useful [relatively unlikely to change]? If so, let's say this CPU ID is "CPU_01-02-ABC"; I can encrypt it to something like "XeR5TY67rgf" and pass it to PayPal as the product ID in the hidden HTML form field. Then I'll get it from PayPal's email notification and add it to the valid license key list at the URL. So even if a hacker knows the scheme uses the CPU ID, he can't write it into the Pass file field, because only encrypted IDs are valid IDs, and only my program knows how to generate the encrypted IDs. And even if another hacker knows the encrypted ID is hidden in the HTML form input field, as long as it's not on my URL list it's still invalid.

    Can anyone find any flaw in the above system? Is it practical? And most importantly, how do I get hold of this unique ID that can represent a user's PC? Frank
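
    On points 13 and 14, purely as an illustration (hardware identifiers are an assumption here: MAC addresses can change or be absent on VMs, and standard Java has no portable CPU-ID API), a machine fingerprint could be derived by hashing the first available network interface's MAC address:

        import java.net.NetworkInterface;
        import java.security.MessageDigest;
        import java.util.Enumeration;

        public class MachineId {
            // Hashes the first available MAC address into a hex string.
            // Illustrative only: not robust against hardware changes or spoofing.
            public static String machineKey() throws Exception {
                byte[] mac = null;
                Enumeration<NetworkInterface> nics = NetworkInterface.getNetworkInterfaces();
                while (nics.hasMoreElements() && mac == null) {
                    mac = nics.nextElement().getHardwareAddress(); // null for loopback/virtual
                }
                if (mac == null) throw new IllegalStateException("no hardware address found");
                byte[] digest = MessageDigest.getInstance("SHA-256").digest(mac);
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                return hex.toString();
            }
        }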


  • Parsing names with pyparsing

    - by johnthexiii
    I have a file of names and ages:

        john 25
        bob 30
        john bob 35

    Here is what I have so far:

        from pyparsing import *

        data = '''
        john 25
        bob 30
        john bob 35
        '''

        name = Word(alphas + Optional(' ') + alphas)
        rowData = Group(name + Suppress(White(" ")) + Word(nums))
        table = ZeroOrMore(rowData)
        print table.parseString(data)

    The output I am expecting is:

        [['john', 25], ['bob', 30], ['john bob', 35]]

    Here is the stack trace:

        Traceback (most recent call last):
          File "C:\Users\mccauley\Desktop\client.py", line 11, in <module>
            eventType = Word(alphas + Optional(' ') + alphas)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1657, in __init__
            self.name = _ustr(self)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 122, in _ustr
            return str(obj)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1743, in __str__
            self.strRepr = "W:(%s)" % charsAsStr(self.initCharsOrig)
          File "C:\Python27\lib\site-packages\pyparsing.py", line 1735, in charsAsStr
            if len(s)>4:
        TypeError: object of type 'And' has no len()


  • How do I serve shortcuts/.lnk from web servers?

    - by acidzombie24
    Without using a database, I wanted a file to point to the newest revision of a file. Someone suggested using a shortcut. Knowing I can rewrite file.ext to file.ext.lnk, I thought it was a great idea. Then I tried it: my server (VS 2010 RC) serves the shortcut rather than the file. Not what I wanted... How do I serve the file the shortcut is pointing to? NOTE: I am planning to use Windows 2008 as my server, so a solution should work on that as well. The OS I am running is Windows 7.


  • How to apply a free third party CA and set up Tomcat SSL with it

    - by lenny
    These days I tried to apply for a free third-party CA (www.cacert.org and www.freeca.cn) and then set up Tomcat SSL with it. My purpose is to eliminate the "Certificate Error" page when accessing https://... from a client browser, but it's proving a little hard to get around. My steps to apply for a free CA from www.freeca.cn: I used keytool to generate a certificate request:

        keytool -genkey ...     // generate a key pair
        keytool -certreq ...    // generate a certificate request

    Then I took the request text from the generated file, pasted it into www.freeca.cn, and received a .cer file. I imported that .cer file:

        keytool -import -alias abc -file MyABC.cer -keystore mykeystorefile.store

    And then I set mykeystorefile.store in Tomcat's conf/server.xml, but it didn't work; it still pops the "Certificate Error" page when trying to access https://.... Can someone help me? Thanks
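
    For reference, a typical sequence looks like the sketch below; the aliases and file names are hypothetical. Two details often bite here: the signed certificate must be imported under the same alias as the original key pair (importing it under a new alias leaves Tomcat serving the self-signed certificate), and the CA's root certificate must be imported first with -trustcacerts. Note also that if the CA's root (e.g. cacert.org's) is not bundled with the browser, the warning will persist no matter how the keystore is built.

        keytool -genkeypair -alias tomcat -keyalg RSA -keystore mykeystorefile.store
        keytool -certreq -alias tomcat -file mysite.csr -keystore mykeystorefile.store
        # ... send mysite.csr to the CA, get MyABC.cer back ...
        keytool -import -trustcacerts -alias ca-root -file root.cer -keystore mykeystorefile.store
        keytool -import -alias tomcat -file MyABC.cer -keystore mykeystorefile.store

    and the matching connector in conf/server.xml:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   scheme="https" secure="true"
                   keystoreFile="conf/mykeystorefile.store" keystorePass="changeit"/>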


  • Reloading Rails Directories on Change for Dev: Not Lib!

    - by yar
    I have checked out several questions on this, including all of those you see next to the question. Unfortunately, I'm not working with a plugin, and I don't want to work in lib. I have a directory called File.join(Rails.root, 'classes'), and I'd like the classes in this directory to reload automatically in dev mode. In my environment.rb I have this line, which works fine and blows up if the path isn't there:

        config.load_paths << File.join(Rails.root, 'classes')

    The reloading line in my development.rb also works fine, and blows up if the file is not there (a good sign):

        require_dependency File.join(Rails.root, 'classes', 'blah.rb')

    However, the file doesn't reload. This all works if the file is in the root of lib and I use the require_dependency line, but my whole point is to get stuff out of lib, as suggested here.


  • Vim: Delete Buffer When Quitting Split Window

    - by Rafid K. Abdullah
    I have this very useful function in my .vimrc:

        function! MyGitDiff()
            !git cat-file blob HEAD:% > temp/compare.tmp
            diffthis
            belowright vertical new
            edit temp/compare.tmp
            diffthis
        endfunction

    What it does is basically open the repository version of the file I am currently working on in a vertical split window, then diff against it. This is very handy, as I can easily compare changes to the original file. However, there is a problem. After finishing the compare, I remove the split window by typing :q. This, however, doesn't remove the buffer from the buffer list, and I can still see the compare.tmp file in the buffer list. This is annoying, because whenever I make a new compare I get this message:

        Warning: File "temp/compare.tmp" has changed since editing started.

    Is there any way to delete the file from the buffer list as well as closing the vertical split window?
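
    One possible tweak, as a sketch: Vim's 'bufhidden' option can be set locally on the temp buffer so the buffer is wiped the moment its window is abandoned.

        function! MyGitDiff()
            !git cat-file blob HEAD:% > temp/compare.tmp
            diffthis
            belowright vertical new
            edit temp/compare.tmp
            " wipe this buffer from the buffer list when its window is closed
            setlocal bufhidden=wipe
            diffthis
        endfunction

    Alternatively, closing the split with :bd instead of :q drops the buffer from the list, though it replaces the window's contents rather than closing the window.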


  • How can I save an NSDocument concurrently?

    - by Paperflyer
    I have a document-based application. Saving the document can take a few seconds, so I want to let the user continue using the program while it saves the document in the background. Due to the document architecture, my application is asked to save to a temporary location, and that temporary file is then copied over the old file. However, this means that I cannot just run my file-saving code in the background and return well before it is done, since the temporary file has to be written completely before it can be copied. Is there a way to disable this temporary-file behavior, or otherwise enable file saving in the background?


  • Where are the log files saved in an Axis2 web service?

    - by KItis
    I have put a log4j.properties file into the WEB-INF/classes folder of my Axis2 web service, and I can now see logs printed on the console. I have also configured a file appender, but I cannot find the log file anywhere. Could someone help me find a solution to this problem?

        log4j.rootLogger=DEBUG, CA, FA

        # Console appender
        log4j.appender.CA=org.apache.log4j.ConsoleAppender
        log4j.appender.CA.layout=org.apache.log4j.PatternLayout
        log4j.appender.CA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

        # File appender
        log4j.appender.FA=org.apache.log4j.FileAppender
        log4j.appender.FA.File=ws.log
        log4j.appender.FA.layout=org.apache.log4j.PatternLayout
        log4j.appender.FA.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

        # Set the logger level of the file appender to WARN
        log4j.appender.FA.Threshold=WARN
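
    A likely explanation, for what it's worth: a relative path such as ws.log resolves against the servlet container's working directory, which varies with how the container is launched, so the file may well exist somewhere unexpected. A hedged fix is an absolute path; under Tomcat, for example, the catalina.home system property can be referenced (assuming the service runs inside Tomcat):

        log4j.appender.FA.File=${catalina.home}/logs/ws.log

    Note also that FA.Threshold=WARN keeps DEBUG and INFO entries out of the file, so with only low-severity logging the file will stay empty even once it is found.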

