Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • How to handle different roles with different privileges in PHP?

    - by user261002
    I would like to know how we can create different user roles for different users in PHP. For example: an administrator can create all types of users, can add, view and manipulate data, and can delete managers, viewers and workers; managers can only create workers and viewers, and can add and view data; workers can't create new users but can add and view data; viewers can only view data that has been added to the DB by workers, managers and administrators. I thought it would be better to use different sessions such as $_SESSION['admin'], $_SESSION['manager'], $_SESSION['worker'] and $_SESSION['viewer'], and to check on every page which of them holds a true value, but I want to know how this is done in real, large projects. Is there any other way?
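
    A common alternative to one session flag per role is to store a single role for the logged-in user and check it against a role-to-permission map on each request. The sketch below is only an illustration of that idea, written in Python; the role and permission names are hypothetical, not taken from any particular framework.

      # Hypothetical role -> permission map; each user carries exactly one role.
      ROLE_PERMISSIONS = {
          "admin":   {"create_users", "delete_users", "add_data", "view_data", "edit_data"},
          "manager": {"create_users", "add_data", "view_data"},  # may only create workers/viewers
          "worker":  {"add_data", "view_data"},
          "viewer":  {"view_data"},
      }

      def has_permission(session, permission):
          """Return True if the single role stored in the session grants the permission."""
          role = session.get("role")  # e.g. "manager"
          return permission in ROLE_PERMISSIONS.get(role, set())

      # Guarding a page or an action:
      session = {"role": "worker"}
      if not has_permission(session, "create_users"):
          raise PermissionError("not allowed")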

    Read the article

  • Flash crash on long JSON load time

    - by MooCow
    I have a Flex program that gets a JSON array from a PHP script. The PHP script doesn't just return a simple JSON array; it grabs data from activeCollab and does some work on that data before encoding it. The first test involved a small JSON array that took PHP a short time to encode. However, when I try to scale the test up, the Flash movie crashes while trying to load the JSON data from PHP. There is no code difference between the tests, just the amount of data and the amount of time it takes PHP to encode it. Am I looking at a memory problem or a timeout problem?

    Read the article

  • foreign key constraints on primary key columns - issues?

    - by zzzeek
    What are the pros and cons, from a performance/indexing/data-management perspective, of creating a one-to-one relationship between tables by using the primary key on the child as the foreign key, versus giving the child a pure surrogate primary key? The first approach seems to reduce redundancy and nicely constrains the one-to-one relationship implicitly, while the second approach seems to be favored by DBAs even though it creates a second index. Primary key as foreign key:

      create table parent (
          id integer primary key,
          data varchar(50)
      );

      create table child (
          id integer primary key references parent(id),
          data varchar(50)
      );

    Pure surrogate key:

      create table parent (
          id integer primary key,
          data varchar(50)
      );

      create table child (
          id integer primary key,
          parent_id integer unique references parent(id),
          data varchar(50)
      );

    The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • Windows debugging - WinDbg

    - by Santhosh77
    Hi, I got the following error while debugging a process with its core dump.

      0:000> !lmi test.exe
      Loaded Module Info: [test.exe]
               Module: test
         Base Address: 00400000
           Image Name: test.exe
         Machine Type: 332 (I386)
           Time Stamp: 4a3a38ec Thu Jun 18 07:54:04 2009
                 Size: 27000
             CheckSum: 54c30
      Characteristics: 10f
      Debug Data Dirs: Type  Size     VA  Pointer
                   MISC    110,     0,   21000 [Debug data not mapped]
                    FPO     50,     0,   21110 [Debug data not mapped]
               CODEVIEW  31820,     0,   21160 [Debug data not mapped] - Can't validate symbols, if present.
           Image Type: FILE - Image read successfully from debugger. test.exe
          Symbol Type: CV - Symbols loaded successfully from image path.
          Load Report: cv symbols & lines

    Does anybody know what the error "CODEVIEW 31820, 0, 21160 [Debug data not mapped] - Can't validate symbols, if present." really means? Does it mean that I can't read public/private symbols from the executable? If not, why does the WinDbg debugger throw this type of error? Thanks in advance, Santhosh.

    Read the article

  • Problem in generation of custom classes at web service client

    - by user443324
    I have a web service which receives an custom object and returns another custom object. It can be deployed successfully on GlassFish or JBoss. @WebMethod(operationName = "providerRQ") @WebResult(name = "BookingInfoResponse" , targetNamespace = "http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com/") public com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse providerRQ(@WebParam(name = "BookingInfoRequest" , targetNamespace = "http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com/") com.nhst.rakes.jaxbutil.tlonewayresprovidrq.BookingInfoRequest BookingInfoRequest) { com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse BookingInfoResponse = new com.nhst.rakes.jaxbutil.tlonewayresprovidrs.BookingInfoResponse(); return BookingInfoResponse; } But when I create a client for this web service, two instances of BookingInfoRequest and BookingInfoResponse generated even I need only one instance. This time an error is returned that says multiple classes with same name are can not be possible....... Here is wsdl..... <?xml version='1.0' encoding='UTF-8'?><!-- Published by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is JAX-WS RI 2.2.1-hudson-28-. --><!-- Generated by JAX-WS RI at http://jax-ws.dev.java.net. RI's version is JAX-WS RI 2.2.1-hudson-28-. --><definitions xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wsp1_2="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tns="http://demo/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="http://demo/" name="DemoJAXBParamService"> <wsp:Policy wsu:Id="DemoJAXBParamPortBindingPolicy"> <ns1:OptimizedMimeSerialization xmlns:ns1="http://schemas.xmlsoap.org/ws/2004/09/policy/optimizedmimeserialization" /> </wsp:Policy> <types> <xsd:schema> <xsd:import namespace="http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com/" schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=1" /> </xsd:schema> <xsd:schema> <xsd:import namespace="http://tlonewayresprovidrs.jaxbutil.rakes.nhst.com" schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=2" /> </xsd:schema> <xsd:schema> <xsd:import namespace="http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com/" schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=3" /> </xsd:schema> <xsd:schema> <xsd:import namespace="http://tlonewayresprovidrq.jaxbutil.rakes.nhst.com" schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=4" /> </xsd:schema> <xsd:schema> <xsd:import namespace="http://demo/" schemaLocation="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService?xsd=5" /> </xsd:schema> </types> <message name="providerRQ"> <part name="parameters" element="tns:providerRQ" /> </message> <message name="providerRQResponse"> <part name="parameters" element="tns:providerRQResponse" /> </message> <portType name="DemoJAXBParam"> <operation name="providerRQ"> <input wsam:Action="http://demo/DemoJAXBParam/providerRQRequest" message="tns:providerRQ" /> <output wsam:Action="http://demo/DemoJAXBParam/providerRQResponse" message="tns:providerRQResponse" /> </operation> </portType> <binding name="DemoJAXBParamPortBinding" type="tns:DemoJAXBParam"> <wsp:PolicyReference URI="#DemoJAXBParamPortBindingPolicy" /> <soap:binding 
transport="http://schemas.xmlsoap.org/soap/http" style="document" /> <operation name="providerRQ"> <soap:operation soapAction="" /> <input> <soap:body use="literal" /> </input> <output> <soap:body use="literal" /> </output> </operation> </binding> <service name="DemoJAXBParamService"> <port name="DemoJAXBParamPort" binding="tns:DemoJAXBParamPortBinding"> <soap:address location="http://localhost:31133/DemoJAXBParamService/DemoJAXBParamService" /> </port> </service> </definitions> So, I want to know that how to generate only one instance(I don't know why two instances are generated at client side?). Please help me to move in right direction.

    Read the article

  • Type patterns in Haskell

    - by finnsson
    I'm trying to compile a simple example of generic classes / type patterns (see http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-classes.html) in Haskell but it won't compile. Any ideas about what's wrong with the code would be helpful. According to the documentation there should be a module Generics with the data types Unit, :*:, and :+: but ghc (6.12.1) complaints about Not in scope: data constructor 'Unit' etc. It seems like there's a package instant-generics with the data types :*:, :+: and U but when I import that module (instead of Generics) I get the error Illegal type pattern in the generic bindings {myPrint _ = ""} The complete source code is import Generics.Instant class MyPrint a where myPrint :: a -> String myPrint {| U |} _ = "" myPrint {| a :*: b |} (x :*: y) = "" (show x) ++ ":*:" ++ (show y) myPrint {| a :+: b |} _ = "" data Foo = Foo String instance MyPrint a => MyPrint a main = myPrint $ Foo "hi" and I compile it using ghc --make Foo.hs -fglasgow-exts -XGenerics -XUndecidableInstances P.S. The module Generics export no data types, only the functions: canDoGenerics mkGenericRhs mkTyConGenericBinds validGenericInstanceType validGenericMethodType

    Read the article

  • Multiple memcached servers question.

    - by Andre
    Hypothetically, if I have multiple memcached servers like this:

      //PHP
      $MEMCACHE_SERVERS = array(
          "10.1.1.1", //web1
          "10.1.1.2", //web2
          "10.1.1.3", //web3
      );
      $memcache = new Memcache();
      foreach($MEMCACHE_SERVERS as $server){
          $memcache->addServer($server);
      }

    And then I set data like this:

      $huge_data_for_frong_page = 'some data blah blah blah';
      $memcache->set("huge_data_for_frong_page", $huge_data_for_frong_page);

    And then I retrieve data like this:

      $huge_data_for_frong_page = $memcache->get("huge_data_for_frong_page");

    When I retrieve this data from the memcached servers, how does the PHP memcache client know which server to query for it? Or is the memcache client going to query all of the memcached servers?
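
    In short, the client (not the servers) decides where a key lives: it hashes the key and maps the hash onto the configured server list, so set() and get() for the same key always talk to the same single server. The snippet below is only a rough illustration of the idea in Python; the simple modulo scheme is not the exact distribution algorithm the PHP Memcache extension uses.

      import hashlib

      SERVERS = ["10.1.1.1", "10.1.1.2", "10.1.1.3"]

      def server_for(key, servers=SERVERS):
          """Map a cache key onto exactly one server by hashing the key."""
          digest = hashlib.md5(key.encode("utf-8")).hexdigest()
          return servers[int(digest, 16) % len(servers)]  # naive modulo distribution

      # Every client configured with the same server list picks the same node,
      # so only one server is queried per key - never all of them.
      print(server_for("huge_data_for_frong_page"))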

    Read the article

  • LINQ - Linq to Sql - Specified cast is not valid - Please Help!

    - by thiag0
    I am trying to do the following... Request request = ( from r in db.Requests where r.Status == "Processing" && r.Locked == false select r).SingleOrDefault(); It is throwing the following exception... Message: Specified cast is not valid. StackTrace: at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression) at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source) at GDRequestProcessor.Worker.GetNextRequest() The .DBML file schema matches my database table that I am trying to select from so I have no clue why I am having this problem. Can anyone help me? Thanks in advance!

    Read the article

  • ASP.Net MVC 2 / EF 4 Reference Issue

    - by Eric J.
    My ASP.Net MVC 2 project references a Domain project where POCO business objects are defined and a Data project where EF 4 POCO persistence is implemented. Things were running well until I had a little fussiness with my version control provider (rollback to previous version left me with merge conflicts). Now, upon launching the MVC 2 project, I get a runtime error: The type 'System.Data.Objects.DataClasses.IEntityWithKey' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. However, every project references System.Data.Entity (same version). If I remove the reference to System.Data.Entity from the MVC 2 project, I get the same message as a compile-time error. I'm pretty sure something got messed up when I had the version control issue, but really not sure where to look for this one.

    Read the article

  • RSA implementations for Java, alternative to BC

    - by Tom Brito
    The RSA implementation that ships with Bouncy Castle only allows encrypting a single block of data. The RSA algorithm is not suited to streaming data and should not be used that way. In a situation like this you should encrypt the data using a randomly generated key and a symmetric cipher; after that you should encrypt the randomly generated key using RSA, and then send the encrypted data and the encrypted random key to the other end, where they can reverse the process (i.e. decrypt the random key using their RSA private key and then decrypt the data). I can't use the workaround of using a symmetric key. So, are there other implementations of RSA than Bouncy Castle?
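
    For readers who can use it, the hybrid scheme described above looks roughly like the sketch below. It is written in Python and assumes a recent version of the third-party cryptography package; it is an illustration of the general pattern, not Bouncy Castle's API.

      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      # The recipient's RSA key pair (normally only the public key is available).
      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      plaintext = b"arbitrarily long payload " * 1000

      # 1. Encrypt the bulk data with a freshly generated symmetric key.
      session_key = AESGCM.generate_key(bit_length=256)
      nonce = os.urandom(12)
      ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)

      # 2. Encrypt only the small session key with RSA-OAEP.
      wrapped_key = public_key.encrypt(
          session_key,
          padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                       algorithm=hashes.SHA256(), label=None),
      )

      # Send (wrapped_key, nonce, ciphertext); the receiver unwraps the session
      # key with the RSA private key and then decrypts the data with AES-GCM.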

    Read the article

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give some example, this is the code I am using. byte[] cookie = new byte[32]; byte[] data = new byte[2]; entities.Insert("param1", "param2", cookie, data); The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000) When I run the code through SQL profiler, I get this result. exec [dbo].[Insert] @param1=N'param1',@param2=N'param2',@cookie=0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 [SNIP because of 16000 zeros] ,@data=0x0000 All parameters went through ok other than the binary(32) cookie. The varbinary(2000) seemed to work fine and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL server? This seems like a big waste of network resource.

    Read the article

  • MySQL get row closest to NOW()

    - by Christopher McCann
    I have a table with user data such as name, address, etc. and another table which has a paragraph of text about the user. The reason they are separate is that we need to keep all the old "about" data, so if the user changes their paragraph the old one should still be stored. Each bit of "about" data has a primary key, aboutMeID. What I want to do is have a join that pulls their name, address, etc. and the latest bit of "about me" data from the other table. I am not sure, though, how I can order the join to only get the latest "about me" data. Can someone help?

    Read the article

  • C++ converting binary(P5) image to ascii(P2) image (.pgm)

    - by tubby
    I am writing a simple program to convert grayscale binary (P5) to grayscale ascii (P2) but am having trouble reading in the binary and converting it to int. #include <iostream> #include <fstream> #include <sstream> using namespace::std; int usage(char* arg) { // exit program cout << arg << ": Error" << endl; return -1; } int main(int argc, char* argv[]) { int rows, cols, size, greylevels; string filetype; // open stream in binary mode ifstream istr(argv[1], ios::in | ios::binary); if(istr.fail()) return usage(argv[1]); // parse header istr >> filetype >> rows >> cols >> greylevels; size = rows * cols; // check data cout << "filetype: " << filetype << endl; cout << "rows: " << rows << endl; cout << "cols: " << cols << endl; cout << "greylevels: " << greylevels << endl; cout << "size: " << size << endl; // parse data values int* data = new int[size]; int fail_tracker = 0; // find which pixel failing on for(int* ptr = data; ptr < data+size; ptr++) { char t_ch; // read in binary char istr.read(&t_ch, sizeof(char)); // convert to integer int t_data = static_cast<int>(t_ch); // check if legal pixel if(t_data < 0 || t_data > greylevels) { cout << "Failed on pixel: " << fail_tracker << endl; cout << "Pixel value: " << t_data << endl; return usage(argv[1]); } // if passes add value to data array *ptr = t_data; fail_tracker++; } // close the stream istr.close(); // write a new P2 binary ascii image ofstream ostr("greyscale_ascii_version.pgm"); // write header ostr << "P2 " << rows << cols << greylevels << endl; // write data int line_ctr = 0; for(int* ptr = data; ptr < data+size; ptr++) { // print pixel value ostr << *ptr << " "; // endl every ~20 pixels for some readability if(++line_ctr % 20 == 0) ostr << endl; } ostr.close(); // clean up delete [] data; return 0; } sample image - Pulled this from an old post. Removed the comment within the image file as I am not worried about this functionality now. When compiled with g++ I get output: $> ./a.out a.pgm filetype: P5 rows: 1024 cols: 768 greylevels: 255 size: 786432 Failed on pixel: 1 Pixel value: -110 a.pgm: Error The image is a little duck and there's no way the pixel value can be -110...where am I going wrong? Thanks.

    Read the article

  • How do I ignore the UTF-8 Byte Order Marker in String comparisons?

    - by Skrud
    I'm having a problem comparing strings in a Unit Test in C# 4.0 using Visual Studio 2010. This same test case works properly in Visual Studio 2008 (with C# 3.5). Here's the relevant code snippet: byte[] rawData = GetData(); string data = Encoding.UTF8.GetString(rawData); Assert.AreEqual("Constant", data, false, CultureInfo.InvariantCulture); While debugging this test, the data string appears to the naked eye to contain exactly the same string as the literal. When I called data.ToCharArray(), I noticed that the first byte of the string data is the value 65279 which is the UTF-8 Byte Order Marker. What I don't understand is why Encoding.UTF8.GetString() keeps this byte around. How do I get Encoding.UTF8.GetString() to not put the Byte Order Marker in the resulting string?
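
    The value 65279 is U+FEFF, i.e. the BOM decoded as an ordinary character instead of being treated as an encoding signature. The same distinction is easy to demonstrate in Python, shown below purely as an illustration of the behavior (it is not the C# fix itself): the plain UTF-8 codec keeps the BOM as a leading character, while the "utf-8-sig" codec strips it.

      raw = b"\xef\xbb\xbfConstant"        # UTF-8 bytes with a leading BOM

      print(repr(raw.decode("utf-8")))      # '\ufeffConstant' - BOM kept as U+FEFF
      print(repr(raw.decode("utf-8-sig")))  # 'Constant'       - BOM stripped

      # Equivalent manual approach: drop a leading U+FEFF after decoding.
      decoded = raw.decode("utf-8")
      if decoded.startswith("\ufeff"):
          decoded = decoded[1:]
      print(decoded == "Constant")          # True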

    Read the article

  • Add a loading graphic jquery

    - by sea_1987
    I am using the jQuery AJAX API to load in some content. The code looks like this:

      $.ajax({
          type: "POST",
          url: "/search/location",
          data: getQuery,
          success: function(data){
              //alert(getQuery);
              //console.log(data);
              $('body.secEmp').html(data); //overwrite current data
              setUpRegionCheckBoxes(); //fire function again to reload DOM
          }
      });

    In my HTML I have <div id="loading">Loading Content</div>, which has a CSS style of display:none. While my AJAX call is bringing in the content I want to show the div that is hidden, but I can't find a way to do it. I have tried attaching .ajaxStart() to my loading div and then calling show(), but that did not work. Any advice?

    Read the article

  • PHP: best practice. Do I save HTML tags in the DB or store the HTML entity value?

    - by Matt
    Hi guys, I was wondering which way I should do the following. I am using the TinyMCE WYSIWYG editor, which formats the user's data with the right HTML tags, and I need to save the data entered into the editor into a database table. Should I encode the HTML tags to their corresponding entities when inserting into the DB, so that when I get the data back from the table I don't have to encode it for XSS purposes, although I'd still have to convert the entities back into HTML tags to format the text? Or do I save the HTML tags into the database and then, when I get the data back from the database, encode the HTML tags to their entities; but then, since the tags should appear to the user as formatting, I'd have to convert them back again to display the data as it was entered? My thoughts are with the first option; I just wondered what you guys thought. Thanks, M
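
    Whichever direction is chosen, the point of entity-encoding is to make markup display as text instead of being interpreted by the browser. The snippet below is only a small Python illustration of that output-time encoding step (html.escape here stands in for PHP's entity-encoding functions; it is not the PHP code itself).

      import html

      stored = "<p>Hello <strong>world</strong></p>"   # markup as the editor produced it

      # Encoded for contexts where the tags should be shown literally:
      print(html.escape(stored))
      # &lt;p&gt;Hello &lt;strong&gt;world&lt;/strong&gt;&lt;/p&gt;

      # Output as-is for contexts where the markup should actually render:
      print(stored)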

    Read the article

  • pass php array to jquery with getJSON

    - by robertdd
    I want to pass a PHP array to jQuery:

      $.getimagesarr = function() {
          $.getJSON('operations.php', {'operation':'getimglist'}, function(data){
              var arr = new Array();
              arr = data;
              return arr;
          });
      }

      var data = $.getimagesarr();
      if (data){
          jQuery.each(data, function(i, val) {
              ....
          });
      }

    It returns undefined. In PHP I have this:

      function getimglist(){
          $results = $_SESSION['files'];
          echo json_encode($results);
      }

    Is this possible?

    Read the article

  • Optimizing BeautifulSoup (Python) code

    - by user283405
    I have code that uses the BeautifulSoup library for parsing, but it is very slow. The code is written in such a way that threads cannot be used. Can anyone help me with this? I am using BeautifulSoup for parsing and then saving into a DB. If I comment out the save statement, it still takes a long time, so there is no problem with the database.

      def parse(self,text):
          soup = BeautifulSoup(text)
          arr = soup.findAll('tbody')
          for i in range(0,len(arr)-1):
              data=Data()
              soup2 = BeautifulSoup(str(arr[i]))
              arr2 = soup2.findAll('td')
              c=0
              for j in arr2:
                  if str(j).find("<a href=") > 0:
                      data.sourceURL = self.getAttributeValue(str(j),'<a href="')
                  else:
                      if c == 2:
                          data.Hits=j.renderContents()
                  #and few others...
                  c = c+1
              data.save()

    Any suggestions? Note: I already asked this question here, but it was closed due to incomplete information.
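
    One large cost in the code above is building a second BeautifulSoup object from str(arr[i]) for every tbody: each row gets re-serialized and re-parsed. The td elements can be read straight from the objects that findAll already returned, and a faster parser can be told to look at tbody elements only. The sketch below assumes BeautifulSoup 4 with the lxml parser installed (an assumption, since the question's version is not stated); Data and getAttributeValue are the question's own helpers.

      from bs4 import BeautifulSoup, SoupStrainer

      def parse(self, text):
          # Parse only the table bodies instead of the whole document.
          soup = BeautifulSoup(text, "lxml", parse_only=SoupStrainer("tbody"))
          for tbody in soup.find_all("tbody"):
              data = Data()
              # Query the already-parsed element; no second BeautifulSoup pass.
              for c, cell in enumerate(tbody.find_all("td")):
                  link = cell.find("a", href=True)
                  if link is not None:
                      data.sourceURL = link["href"]
                  elif c == 2:
                      data.Hits = cell.get_text()
                  # ...and the other fields...
              data.save()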

    Read the article

  • Insert multiple rows into temp table with one command in SQL2005

    - by Adam Haile
    I've got some data in the following format:

      -1,-1,-1,-1,701,-1,-1,-1,-1,-1,304,390,403,435,438,439,442,455

    I need to insert it into a temp table like this:

      CREATE TABLE #TEMP
      (
          Node int
      )

    so that I can use it in a comparison with data in another table. The data above represents separate rows of the "Node" column. Is there an easy way to insert this data, all in one command? Also, the data will actually be coming in as seen, as a string... so I need to be able to just concatenate it into the SQL query string. I can obviously modify it first if needed.

    Read the article

  • Python: Unpack arbitrary length bits for database storage

    - by sberry2A
    I have a binary data format consisting of 18,000+ packed int64s, ints, shorts, bytes and chars. The data is packed to minimize its size, so the values don't always use byte-sized chunks. For example, a number whose min and max values are 31 and 32 respectively might be stored with a single bit, where the actual value is bitvalue + min, so 0 is 31 and 1 is 32. I am looking for the most efficient way to unpack all of these for subsequent processing and database storage. Right now I am able to read any value by using either struct.unpack or BitBuffer: I use struct.unpack for any data that starts on a bit where (bit-offset % 8 == 0 and data-length % 8 == 0), and I use BitBuffer for anything else. I know the offset and size of every packed piece of data, so what is going to be the fastest way to completely unpack them? Many thanks.
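
    One approach that avoids switching between struct.unpack and a bit-level reader is to convert the whole record to a single integer once and then slice every field out of it with shifts and masks. The sketch below is only an illustration of that idea; the field names, offsets and min-value adjustment are made up, and it assumes the format packs fields most-significant-bit first.

      def unpack_fields(record: bytes, fields):
          """fields: iterable of (name, bit_offset, bit_length, min_value) tuples."""
          value = int.from_bytes(record, "big")   # whole record as one integer, once
          total_bits = len(record) * 8
          out = {}
          for name, bit_offset, bit_length, min_value in fields:
              shift = total_bits - bit_offset - bit_length   # distance to the field's LSB
              out[name] = ((value >> shift) & ((1 << bit_length) - 1)) + min_value
          return out

      # Example: a 1-bit field whose legal range is 31..32, then a 7-bit count.
      record = bytes([0b10000011])
      print(unpack_fields(record, [("flag", 0, 1, 31), ("count", 1, 7, 0)]))
      # -> {'flag': 32, 'count': 3}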

    Read the article

  • Generating 8000 text files from xml files

    - by Ray
    Hi all, i need to generate the same number of text files as the xml files i have. Within the text files, i need the title and maybe some other tags of it. I can generate text files with the elements i wanted but not all xml files can be generated. Only some of them are generated. Something might be wrong with my parser so help out please thanks. This is my code. Please have a look and give me suggestions. Thanks in advance. import java.io.File; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import org.w3c.dom.*; import java.io.*; public class AccessingXmlFile1 { public static void main(String argv[]) { try { //File file = new File("C:\\MyFile.xml"); // create a file that is really a directory File aDirectory = new File("C:/Documents and Settings/I2R/Desktop/test"); // get a listing of all files in the directory String[] filesInDir = aDirectory.list(); System.out.println(""+filesInDir.length); // sort the list of files (optional) // Arrays.sort(filesInDir); //////////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////////// // have everything i need, just print it now for ( int a=0; a<filesInDir.length; a++ ) { String xmlFile = filesInDir[a]; String newLine = System.getProperty("line.separator"); File file = new File(xmlFile); DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); DocumentBuilder db = dbf.newDocumentBuilder(); Document document = db.parse(file); document.getDocumentElement().normalize(); //System.out.println("Root element " + document.getDocumentElement().getNodeName()); NodeList node = document.getElementsByTagName("metadata"); System.out.println("Information of Xml File"); System.out.println(xmlFile.substring(0, xmlFile.length() - 4)); //////////////////////////////////////////////////////////////////////////////////// String titleStoreText = ""; String descriptionStoreText = ""; String collectionStoreText = ""; String textToWrite = ""; //////////////////////////////////////////////////////////////////////////////////// for (int i = 0; i < node.getLength(); i++) { Node firstNode = node.item(i); if (firstNode.getNodeType() == Node.ELEMENT_NODE) { Element element = (Element) firstNode; NodeList titleElementList = element.getElementsByTagName("title"); Element titleElement = (Element) titleElementList.item(0); NodeList title = titleElement.getChildNodes(); //////////////////////////////////////////////////////////////////////////////////// if(titleElement == null) titleStoreText = " There is no title for this file."+ newLine; else titleStoreText = titleStoreText+((Node) title.item(0)).getNodeValue() + newLine; //titleStoreText = titleStoreText+((Node) title.item(0)).getNodeValue()+ newLine; //////////////////////////////////////////////////////////////////////////////////// System.out.println("Title : " + titleStoreText); NodeList collectionElementList = element.getElementsByTagName("collection"); Element collectionElement = (Element) collectionElementList.item(0); NodeList collection = collectionElement.getChildNodes(); //////////////////////////////////////////////////////////////////////////////////// if(collectionElement == null) collectionStoreText = " There is no collection for this file."+ newLine; else collectionStoreText = collectionStoreText+((Node) collection.item(0)).getNodeValue() + newLine; //collectionStoreText = collectionStoreText+((Node) collection.item(0)).getNodeValue()+ newLine; 
//////////////////////////////////////////////////////////////////////////////////// System.out.println("Collection : " + collectionStoreText); NodeList descriptionElementList = element.getElementsByTagName("description"); Element descriptionElement = (Element) descriptionElementList.item(0); NodeList description = descriptionElement.getChildNodes(); //////////////////////////////////////////////////////////////////////////////////// if(descriptionElement == null) descriptionStoreText = " There is no description for this file."+ newLine; else descriptionStoreText = descriptionStoreText+((Node) description.item(0)).getNodeValue() + newLine; //descriptionStoreText = descriptionStoreText+((Node) description.item(0)).getNodeValue() + newLine; //////////////////////////////////////////////////////////////////////////////////// System.out.println("Description : " + descriptionStoreText); //////////////////////////////////////////////////////////////////////////////////// textToWrite = "=====Title=====" + newLine + titleStoreText + newLine + "=====Collection=====" + newLine + collectionStoreText + newLine + "=====Description=====" + newLine + descriptionStoreText;// + newLine + "=====Subject=====" + newLine + subjectStoreText; //////////////////////////////////////////////////////////////////////////////////// } } ///////////////////////////////////////////write to file part is here///////////////////////////////////////// Writer output = null; File file2 = new File(xmlFile.substring(0, xmlFile.length() - 4)+".txt"); output = new BufferedWriter(new FileWriter(file2)); output.write(textToWrite); output.close(); System.out.println("Your file has been written"); //////////////////////////////////////////////////////////////////////////////////// } } catch (Exception e) { e.printStackTrace(); } } }

    Read the article

  • python: problem with dictionary get method default value

    - by goutham
    I'm having a new problem here.

    CODE 1:

      try:
          urlParams += "%s=%s&" % (val['name'], data.get(val['name'], serverInfo_D.get(val['name'])))
      except KeyError:
          print "expected parameter not provided - " + val["name"] + " is missing"
          exit(0)

    CODE 2:

      try:
          urlParams += "%s=%s&" % (val['name'], data.get(val['name'], serverInfo_D[val['name']]))
      except KeyError:
          print "expected parameter not provided - " + val["name"] + " is missing"
          exit(0)

    Note the difference between serverInfo_D.get(val['name']) and serverInfo_D[val['name']]: code 2 fails but code 1 works. The data is serverInfo_D = {'user': 'usr', 'pass': 'pass'}, data = {'par1': 9995, 'extraparam1': 22} and val = {'par1', 'user', 'pass', 'extraparam1'}. The exception is raised for the data dict, and all of this code is in a for loop which iterates over val.
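
    The likely reason code 2 fails is that Python evaluates function arguments eagerly: serverInfo_D[val['name']] is looked up before data.get() is even called, so a key that exists in data but not in serverInfo_D still raises KeyError. With .get() as the fallback (code 1) a missing key just yields None instead. A small demonstration (Python 3 print syntax):

      serverInfo_D = {'user': 'usr', 'pass': 'pass'}
      data = {'par1': 9995, 'extraparam1': 22}
      name = 'par1'

      # Works: the fallback evaluates to None, and data has the key anyway.
      print(data.get(name, serverInfo_D.get(name)))    # -> 9995

      # Raises KeyError('par1'): the fallback expression is evaluated *before*
      # data.get() runs, even though its value would never be needed.
      try:
          print(data.get(name, serverInfo_D[name]))
      except KeyError as exc:
          print("KeyError:", exc)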

    Read the article

  • Javascript pass reference by value

    - by Carlos R. Batista
    I'm having this weird reference issue when I'm trying to get a JSON file through jQuery:

      var themeData;
      $.getJSON("json/sample.js", function(data) {
          themeData = data.theme;
          console.log(themeData.sample[0].description);
      });
      console.log(themeData.sample[0].description);

    The first console.log works, the second doesn't. I'm guessing it's because "data" has already expired by the time the script gets there and themeData is just a mere pointer to "data". Is there a way I can make sure themeData gets a duplicate of "data" and not just a pointer to it?

    Read the article

  • How to save position after reload DataGridView

    - by bobik
    This is my code:

      private void getData(string selectCommand)
      {
          string connectionString = @"Server=localhost;User=SYSDBA;Password=masterkey;Database=C:\data\test.fdb";
          dataAdapter = new FbDataAdapter(selectCommand, connectionString);
          DataTable data = new DataTable();
          dataAdapter.Fill(data);
          bindingSource.DataSource = data;
      }

      private void button1_Click(object sender, EventArgs e)
      {
          getData(dataAdapter.SelectCommand.CommandText);
      }

      private void Form1_Load(object sender, EventArgs e)
      {
          dataGridView1.DataSource = bindingSource;
          getData("SELECT * FROM cities");
      }

    After reloading the data on the button1 click, the cell selection jumps to the first column and the scrollbars are reset. How do I save the position of the DataGridView?

    Read the article

  • How to stream a WAV file?

    - by jonasb
    I'm writing an app where I record audio and upload the audio file over the web. In order to speed up the upload I want to start uploading before I've finished recording. The file I'm creating is a WAV file. My plan was to use multiple data chunks. So instead of the normal encoding (RIFF, fmt , data) I’m using (RIFF, fmt , data, data, ..., data). The first issue is that the RIFF header wants the total length of the whole file, but that is of course not known when streaming the audio (I’m now using an arbitrary number). The other problem is that I'm not sure if it's valid since Audacity doesn't recognise the file, and Windows Media Player opens the file but plays only a very small part. I've been reading WAV specs but haven’t found an answer. Any suggestions?
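
    For reference, these are the fields a canonical PCM WAV header carries; both the RIFF chunk size and the data chunk size describe the finished file, which is exactly what is unknown while streaming, and writing an arbitrary value there is a common reason strict players reject the file. The sketch below is a Python illustration of the 44-byte header layout (the sample-format values are examples only); a usual workaround is to write placeholder sizes and patch both fields once recording ends.

      import struct

      def wav_header(data_size, sample_rate=44100, channels=1, bits_per_sample=16):
          """Build a canonical 44-byte PCM WAV header for data_size bytes of samples."""
          byte_rate = sample_rate * channels * bits_per_sample // 8
          block_align = channels * bits_per_sample // 8
          return (
              struct.pack("<4sI4s", b"RIFF", 36 + data_size, b"WAVE") +   # RIFF chunk
              struct.pack("<4sIHHIIHH", b"fmt ", 16, 1, channels,         # fmt  chunk (PCM)
                          sample_rate, byte_rate, block_align, bits_per_sample) +
              struct.pack("<4sI", b"data", data_size)                     # data chunk
          )

      # data_size is unknown while streaming; write 0 (or another placeholder)
      # now, then seek back and patch the two size fields after the upload ends.
      header = wav_header(data_size=0)
      assert len(header) == 44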

    Read the article
