Search Results

Search found 1513 results on 61 pages for 'unexpected'.


  • Every subsequent call to an Axis webservice fails

    - by cudiaco
    I've been having a strange issue with an Axis webservice called over HTTPS. When an invocation is made, the first call goes through just fine. If the call is made again, the web service fails, returning the following fault:

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Body>
            <soapenv:Fault>
              <faultcode>soapenv:Server.userException</faultcode>
              <faultstring>org.xml.sax.SAXParseException: Unexpected end of file after null</faultstring>
              <detail>
                <ns1:hostname xmlns:ns1="http://xml.apache.org/axis/">f0s0</ns1:hostname>
              </detail>
            </soapenv:Fault>
          </soapenv:Body>
        </soapenv:Envelope>

    When tested locally (over plain HTTP), the service works perfectly fine. Has anyone encountered an issue like this before? Any help is appreciated.
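
    One avenue worth ruling out, offered as an assumption rather than a diagnosis: the fault looks like what happens when a keep-alive connection is reused after the far end has quietly closed it, which HTTPS front-ends make more likely. A sketch for an Axis 1.x client that forces HTTP 1.0, disabling connection reuse (exception handling omitted):

        import org.apache.axis.MessageContext;
        import org.apache.axis.client.Call;
        import org.apache.axis.client.Service;
        import org.apache.axis.transport.http.HTTPConstants;

        // Force HTTP 1.0 so every invocation opens a fresh connection
        // instead of reusing a possibly half-closed keep-alive socket.
        Service service = new Service();
        Call call = (Call) service.createCall();
        call.setProperty(MessageContext.HTTP_TRANSPORT_VERSION,
                         HTTPConstants.HEADER_PROTOCOL_V10);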


  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried plain inserts. No go. Then I tried inserts wrapped in BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that rules are ignored. (It was also having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data:

        station_id,taken,amount,category_id,flag
        1,'1984-07-1',0,4,
        1,'1984-07-2',0,4,
        1,'1984-07-3',0,4,
        1,'1984-07-4',0,4,T

    Here is the table structure (with one rule included):

        CREATE TABLE climate.measurement (
          id bigserial NOT NULL,
          station_id integer NOT NULL,
          taken date NOT NULL,
          amount numeric(8,2) NOT NULL,
          category_id smallint NOT NULL,
          flag character varying(1) NOT NULL DEFAULT ' '::character varying
        )
        WITH (
          OIDS=FALSE
        );
        ALTER TABLE climate.measurement OWNER TO postgres;

        CREATE OR REPLACE RULE i_measurement_01_001 AS
        ON INSERT TO climate.measurement
        WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1
        DO INSTEAD
          INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag)
          VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag);

    I can generate the data in any format. I'm looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL, and am eager to use its PL/R extension for stats. I was also thinking about using pg_bulkload: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!
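
    Since COPY bypasses rules entirely, one workaround worth sketching (an untested outline; the file path and the pre-split step are hypothetical) is to partition the input by month and category up front and COPY each slice straight into its child table, which also sidesteps the column-order and id problems:

        -- Load the January / category-1 slice directly into its child table.
        -- Naming the columns skips the bigserial id (it fills from the sequence)
        -- and fixes the column-order mismatch.
        COPY climate.measurement_01_001 (station_id, taken, amount, category_id, flag)
        FROM '/tmp/measurement_01_001.csv' WITH CSV;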


  • Another C question

    - by maddy
    Hi all, I have a piece of code shown below:

        #include <stdio.h>
        #include <stdlib.h>

        void Advance_String(char [2],int );
        int Atoi_val;
        int Count22;
        int Is_Milestone(char [2],int P2);
        char String[2] = "00";

        main()
        {
            while(1)
            {
                if(Is_Milestone(S,21==1)
                {
                    if(atoi(S)==22)
                    {
                        Count_22 = Count_22 + 1;
                    }
                }
                Atoi_val = atoi(S);
                Advance_String(S,Atoi_val);
            }
        }

        int Is_Milestone(char P1[2],int P2)
        {
            int BoolInit;
            char *Ptr = P1;
            int value = atoi(Ptr);
            BoolInit = (value > P2);
            return BoolInit;
        }

        void Advance_String(char P1[2],int Value)
        {
            if(Value!=7)
            {
                P1[1] = P1[1]+1;
            }
            else
            {
                P1[1] = '0';
                P1[0] = P1[0]+1 ;
            }
        }

    Now my problem is that Count22 never increments, as the char increments never achieve the value 21 or above. Could anyone please tell me the reason for this unexpected behaviour? My question here is to find the value of Count22. Is there any problem with the code? Thanks and regards, Maddy


  • int considered harmful?

    - by Chris Becke
    Working on code meant to be portable between Win32, Win64 and Cocoa, I am really struggling to get to grips with what the @#$% the various standards committees involved over the past decades were thinking when they first came up with, and then perpetuated, the crime against humanity that is the C native type set - char, short, int and long.

    On the one hand, as an old-school C++ programmer, there are few statements as elegant and/or as simple as

        for(int i=0; i<some_max; i++)

    but now it seems that, in the general case, this code can never be correct. Oh sure, given a particular version of MSVC or GCC, with specific targets, the size of 'int' can be safely assumed. But, in the case of writing very generic C/C++ code that might one day be used on 16-bit hardware, or 128, or just be exposed to a particularly weirdly set-up 32/64-bit compiler, how does one use int in C++ code in a way that the resulting program would have predictable behavior in any and all possible C++ compilers that implement C++ according to spec?

    To resolve these unpredictabilities, C99 (and later C++) introduced size_t, uintptr_t, ptrdiff_t, int8_t, int16_t, int32_t, int64_t and so on. Which leaves me thinking that a raw int, anywhere in pure C++ code, should really be considered harmful, as there is some (completely C++xx-conforming) compiler that's going to produce an unexpected or incorrect result with it (and probably be an attack vector as well).
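
    For what it's worth, a minimal sketch of the fixed-width alternative the question alludes to (this assumes a compiler providing <cstdint>, i.e. C++11, or <stdint.h> from C99):

        #include <cstdint>
        #include <cstddef>
        #include <iostream>

        int main()
        {
            const std::size_t some_max = 8;              // size_t spans the platform's object sizes by definition
            for (std::size_t i = 0; i < some_max; ++i)   // loop index cannot overflow the container range
                std::cout << i << ' ';

            std::int32_t exact = 1 << 30;                // exactly 32 bits on every platform that defines it
            std::cout << '\n' << exact << '\n';
            return 0;
        }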


  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.)

    This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, the value is truncated at 43679 characters.

    I've read in various locations on the Internet that you can set Maximum Characters Retrieved for XML Data in Tools > Options > Query Results > SQL Server > Results To Grid to Unlimited, and then perform a query such as this:

        select Convert(xml, Details) from Events where EventID = 13920

    (Note that the data in this column is not XML at all. CONVERTing the column to XML is merely a workaround I found from Googling that someone else has used to get around the limit SSMS imposes on retrieving data from a text or varchar(MAX) column.)

    However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:

        Unable to show XML. The following error happened:
        Unexpected end of file has occurred. Line 5, position 220160.
        One solution is to increase the number of characters retrieved from the server for XML data.
        To change this setting, on the Tools menu, click Options.

    So, any idea on how to access this data? Would converting the column to varchar(MAX) fix my woes?
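
    One blunt workaround, sketched on the assumption that the grid cap is the only obstacle: cast the legacy text column to varchar(MAX) and page the value out in slices that fit under the 43679-character ceiling, reassembling in the editor:

        -- First and second 43679-character slices of the value; the CAST avoids
        -- the awkward SUBSTRING semantics of the legacy text type.
        SELECT SUBSTRING(CAST(Details AS varchar(MAX)), 1, 43679)     FROM Events WHERE EventID = 13920;
        SELECT SUBSTRING(CAST(Details AS varchar(MAX)), 43680, 43679) FROM Events WHERE EventID = 13920;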


  • Deserializing JSON in WCF throws xml errors in .Net 4.0

    - by Syg
    Hi there. I'm going slightly mad over here; maybe someone else can figure out what's going on. I have a WCF service exposing a function using WebInvoke, like so:

        [OperationContract]
        [WebInvoke(Method = "POST",
                   BodyStyle = WebMessageBodyStyle.Wrapped,
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json,
                   UriTemplate = "registertokenpost")]
        void RegisterDeviceTokenForYoumiePost(test token);

    The data contract for the test class looks like this:

        [DataContract(Namespace="Zooma.Test", Name="test", IsReference=true)]
        public class test
        {
            string waarde;

            [DataMember(Name="waarde", Order=0)]
            public string Waarde
            {
                get { return waarde; }
                set { waarde = value; }
            }
        }

    When sending the following JSON message to the service:

        { "test": { "waarde": "bla" } }

    the trace log gives me the error below. I have tried this with just a string instead of the data type (void RegisterDeviceTokenForYoumiePost(string token);) but I get the same error. All help is appreciated; I can't figure it out. It looks like it's creating invalid XML from the JSON message, but I'm not doing any custom serialization here.

        The formatter threw an exception while trying to deserialize the message:
        Error in deserializing body of request message for operation
        'RegisterDeviceTokenForYoumiePost'. Unexpected end of file.
        Following elements are not closed: waarde, test, root.
        at System.ServiceModel.Dispatcher.OperationFormatter.DeserializeRequest(Message message, Object[] parameters)
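
    Two things worth checking, offered as assumptions rather than a confirmed diagnosis: with WebMessageBodyStyle.Wrapped the outer JSON key is matched against the parameter name (token), not the contract name (test), and IsReference=true is reportedly not supported by the JSON serializer, so it is worth removing while testing. The payload matching the signature above would be:

        { "token": { "waarde": "bla" } }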


  • Baffled by differences between WPF BitmapEncoders

    - by DanM
    I wrote a little utility class that saves BitmapSource objects to image files. The image files can be either bmp, jpeg, or png. Here is the code:

        public class BitmapProcessor
        {
            public void SaveAsBmp(BitmapSource bitmapSource, string path)
            {
                Save(bitmapSource, path, new BmpBitmapEncoder());
            }

            public void SaveAsJpg(BitmapSource bitmapSource, string path)
            {
                Save(bitmapSource, path, new JpegBitmapEncoder());
            }

            public void SaveAsPng(BitmapSource bitmapSource, string path)
            {
                Save(bitmapSource, path, new PngBitmapEncoder());
            }

            private void Save(BitmapSource bitmapSource, string path, BitmapEncoder encoder)
            {
                using (var stream = new FileStream(path, FileMode.Create))
                {
                    encoder.Frames.Add(BitmapFrame.Create(bitmapSource));
                    encoder.Save(stream);
                }
            }
        }

    Each of the three Save methods works, but I get unexpected results with bmp and jpeg. Png is the only format that produces an exact reproduction of what I see if I show the BitmapSource on screen using a WPF Image control. Here are the results:

        BMP  - too dark
        JPEG - too saturated
        PNG  - correct

    Why am I getting completely different results for different file types? I should note that the BitmapSource in my example uses an alpha value of 0.1 (which is why it appears very desaturated), but it should be possible to show the resulting colors in any image format. I know that if I take a screen capture using something like HyperSnap, it will look correct regardless of what file type I save to, so this isn't an inherent limitation of those formats; there's definitely something strange about WPF's image encoders. Do I have a setting wrong? Am I missing something?
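
    A guess at the mechanism, flagged as an assumption: BMP and JPEG can't store an alpha channel, so each encoder must flatten the 0.1 alpha somehow, and they evidently don't agree on how. A sketch of one way to take that decision away from them - convert to an opaque pixel format before encoding:

        // Sketch: flatten to 24-bit BGR before encoding so BMP/JPEG never see alpha.
        // Note this discards transparency rather than compositing over a background.
        var opaque = new FormatConvertedBitmap(bitmapSource, PixelFormats.Bgr24, null, 0);
        encoder.Frames.Add(BitmapFrame.Create(opaque));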


  • Determining the color of a pixel in a bitmap using C# in a WPF app

    - by DanM
    The only way I found so far is System.Drawing.Bitmap.GetPixel(), but Microsoft has warnings for System.Drawing that are making me wonder if this is the "old way" to do it. Are there any alternatives?

    Here's what Microsoft says about the System.Drawing namespace. I also noticed that the System.Drawing assembly was not automatically added to the references when I created a new WPF project.

        System.Drawing Namespace
        The System.Drawing namespace provides access to GDI+ basic graphics functionality.
        More advanced functionality is provided in the System.Drawing.Drawing2D,
        System.Drawing.Imaging, and System.Drawing.Text namespaces. The Graphics class
        provides methods for drawing to the display device. Classes such as Rectangle and
        Point encapsulate GDI+ primitives. The Pen class is used to draw lines and curves,
        while classes derived from the abstract class Brush are used to fill the interiors
        of shapes.

        Caution: Classes within the System.Drawing namespace are not supported for use
        within a Windows or ASP.NET service. Attempting to use these classes from within
        one of these application types may produce unexpected problems, such as diminished
        service performance and run-time exceptions.

        - http://msdn.microsoft.com/en-us/library/system.drawing.aspx
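
    A WPF-native alternative that avoids System.Drawing altogether, sketched under the assumption that the image is available as a BitmapSource: normalize to Bgra32 and read the pixel out with CopyPixels.

        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Media.Imaging;

        static Color GetPixelColor(BitmapSource source, int x, int y)
        {
            // Normalize to Bgra32 so the 4-bytes-per-pixel layout below always holds.
            var bgra = new FormatConvertedBitmap(source, PixelFormats.Bgra32, null, 0);
            var pixel = new byte[4];
            bgra.CopyPixels(new Int32Rect(x, y, 1, 1), pixel, 4, 0);   // stride = 4 for one pixel
            return Color.FromArgb(pixel[3], pixel[2], pixel[1], pixel[0]);
        }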


  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both branches), one at C7 on RB, and ones at C9 and C10 on trunk. So the history for my changed file looks like this:

        RB:    C5 -> C7
        Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git.

    Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.


  • Can't get RDFlib to work on windows

    - by john
    I have installed RDFlib 3.0 and everything that is needed, but when I run the following code I get an error. The code below is from http://code.google.com/p/rdflib/wiki/IntroSparql. I have tried for hours to fix this but with no success. Can someone please help?

        import rdflib
        rdflib.plugin.register('sparql', rdflib.query.Processor,
                               'rdfextras.sparql.processor', 'Processor')
        rdflib.plugin.register('sparql', rdflib.query.Result,
                               'rdfextras.sparql.query', 'SPARQLQueryResult')

        from rdflib import ConjunctiveGraph
        g = ConjunctiveGraph()
        g.parse("http://bigasterisk.com/foaf.rdf")
        g.parse("http://www.w3.org/People/Berners-Lee/card.rdf")

        from rdflib import Namespace
        FOAF = Namespace("http://xmlns.com/foaf/0.1/")
        g.parse("http://danbri.livejournal.com/data/foaf")
        [g.add((s, FOAF['name'], n)) for s,_,n in g.triples((None, FOAF['member_name'], None))]

        for row in g.query(
                """SELECT ?aname ?bname
                   WHERE {
                       ?a foaf:knows ?b .
                       ?a foaf:name ?aname .
                       ?b foaf:name ?bname .
                   }""",
                initNs=dict(foaf=Namespace("http://xmlns.com/foaf/0.1/"))):
            print "%s knows %s" % row

    The error I get is:

        Traceback (most recent call last):
          File "...", line 18, in <module>
            initNs=dict(foaf=Namespace("http://xmlns.com/foaf/0.1/"))):
        TypeError: query() got an unexpected keyword argument 'initNS'
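
    One observation, offered as a guess rather than a verified fix: the TypeError names initNS with a capital S, while the listing reads initNs, so the file that actually ran may differ from what is posted. The keyword is case-sensitive, and the call rdflib expects looks like this (query_string standing in for the SPARQL above):

        # the keyword is initNs, not initNS
        for row in g.query(query_string, initNs=dict(foaf=FOAF)):
            print "%s knows %s" % row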


  • Why are my events being called more than once?

    - by Arms
    In my Flash project I have a movieclip that has 2 keyframes. Both frames contain 1 movieclip each:

        frame 1 - Landing
        frame 2 - Game

    The flow of the application is simple:

        1. User arrives on the landing page (frame 1)
        2. User clicks the "start game" button
        3. User is brought to the game page (frame 2)
        4. When the game is over, the user can press a "play again" button, which brings them back to step 1

    Both the Landing and Game movieclips are linked to separate classes that define event listeners. The problem is that when I end up back at step 1 after playing the game, the Game event listeners fire twice for their respective events. And if I go through the process a third time, the event listeners fire three times for every event. This keeps happening, so if I loop through the application flow 7 times, the event listeners fire seven times.

    I don't understand why this is happening, because on frame 1 the Game movieclip (and, I would assume, its related class instance) does not exist - but I'm clearly missing something here. I've run into this problem in other projects too, and tried fixing it by first checking if the event listeners existed and only defining them if they didn't, but I ended up with unexpected results that didn't really solve the problem.

    I need to ensure that the event listeners only fire once. Any advice & insight would be greatly appreciated, thanks!
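
    A sketch of the usual pattern (assumptions: ActionScript 3, and the button/handler names here are hypothetical): either remove listeners when leaving the game frame, or register them with a weak reference so an abandoned Game instance can be collected instead of living on to handle the next round's events.

        // On leaving the game, unhook everything this instance registered:
        playAgainButton.removeEventListener(MouseEvent.CLICK, onPlayAgain);

        // Or register with a weak reference (5th argument) so a replaced
        // instance doesn't keep firing alongside the new one:
        playAgainButton.addEventListener(MouseEvent.CLICK, onPlayAgain, false, 0, true);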


  • How to fix "OutOfMemoryError: java heap space" while compiling MonoDroid App in MonoDevelop

    - by Rodja
    When I try to compile one of my projects, I recently get the following error:

        Tool /usr/bin/java execution started with arguments: -jar /Applications/android-sdk-mac_x86/platform-tools/lib/dx.jar --no-strict --dex --output=obj/Debug/android/bin/classes.dex obj/Debug/android/bin/classes /Developer/MonoAndroid/usr/lib/mandroid/platforms/android-8/mono.android.jar FlurryAnalytics/Jars/FlurryAgent.jar Jars/android-support-v4.jar

        UNEXPECTED TOP-LEVEL ERROR:
        java.lang.OutOfMemoryError: Java heap space
            at com.android.dx.rop.code.RegisterSpecSet.<init>(RegisterSpecSet.java:49)
            at com.android.dx.rop.code.RegisterSpecSet.mutableCopy(RegisterSpecSet.java:383)
            at com.android.dx.ssa.LocalVariableInfo.mutableCopyOfStarts(LocalVariableInfo.java:169)
            at com.android.dx.ssa.LocalVariableExtractor.processBlock(LocalVariableExtractor.java:104)
            at com.android.dx.ssa.LocalVariableExtractor.doit(LocalVariableExtractor.java:90)
            at com.android.dx.ssa.LocalVariableExtractor.extract(LocalVariableExtractor.java:56)
            at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:50)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:99)
            at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:73)
            at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:273)
            at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:134)
            at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:87)
            at com.android.dx.command.dexer.Main.processClass(Main.java:487)
            at com.android.dx.command.dexer.Main.processFileBytes(Main.java:459)
            at com.android.dx.command.dexer.Main.access$400(Main.java:67)
            at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:398)
            at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
            at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:131)
            at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:109)
            at com.android.dx.command.dexer.Main.processOne(Main.java:422)
            at com.android.dx.command.dexer.Main.processAllFiles(Main.java:333)
            at com.android.dx.command.dexer.Main.run(Main.java:209)
            at com.android.dx.command.dexer.Main.main(Main.java:174)
            at com.android.dx.command.Main.main(Main.java:91)

    Other projects build as expected. I think I need to increase the heap size for this Java build step - but how?
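
    The generic JVM answer is to raise the maximum heap with -Xmx (e.g. java -Xmx1G -jar dx.jar ...); the question is how to get that flag into a build step MonoDevelop invokes for you. A sketch of the project-level switch, with the caveat that this is an assumption - the JavaMaximumHeapSize MSBuild property exists in later Xamarin.Android releases, and I can't confirm which MonoDroid versions honor it:

        <!-- in the Android project's .csproj -->
        <PropertyGroup>
          <JavaMaximumHeapSize>1G</JavaMaximumHeapSize>
        </PropertyGroup>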


  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writing steps, the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers.

    The benchmarking is run twice: the first time, the default number of document revisions (=1000) is used (_revs_limit); the second time, the number of document revisions is set to 1. (Each run produces a plot of database size against revision number.)

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size value should be constant, as the older revisions are discarded. After the compaction the size should fall significantly.

    In the second run the first revision should result in a certain database size that is then kept during the following writing steps, as every new revision leads to the deletion of the previous one.

    I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct my assumptions that lead to the wrong expectations?
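
    For reference, a sketch of the knobs the script is exercising, against a default local CouchDB (the database name is hypothetical):

        # limit stored revisions per document to 1
        curl -X PUT http://localhost:5984/mydb/_revs_limit -d '1'

        # trigger compaction, then check the reported size
        curl -X POST -H "Content-Type: application/json" http://localhost:5984/mydb/_compact
        curl http://localhost:5984/mydb   # the disk_size field in the response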


  • PHP DOMDocument getElementsByTagname??

    - by FFish
    This is driving me bonkers... I just want to add another img node.

        $xml = <<<XML
        <?xml version="1.0" encoding="UTF-8"?>
        <gallery>
            <album tnPath="tn/" lgPath="imm/" fsPath="iml/" >
                <img src="004.jpg" caption="4th caption" />
                <img src="005.jpg" caption="5th caption" />
                <img src="006.jpg" caption="6th caption" />
            </album>
        </gallery>
        XML;

        $xmlDoc = new DOMDocument();
        $xmlDoc->loadXML($xml);

        $album = $xmlDoc->getElementsByTagname('album')[0];
        // Parse error: syntax error, unexpected '[' in
        // /Applications/XAMPP/xamppfiles/htdocs/admin/tests/DOMDoc.php on line 17

        $album = $xmlDoc->getElementsByTagname('album');
        // Fatal error: Call to undefined method DOMNodeList::appendChild() in
        // /Applications/XAMPP/xamppfiles/htdocs/admin/tests/DOMDoc.php on line 19

        $newImg = $xmlDoc->createElement("img");
        $album->appendChild($newImg);
        print $xmlDoc->saveXML();
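
    For what it's worth, the usual fix: getElementsByTagName() returns a DOMNodeList, not an array, so the first node comes out via item(0) (array dereferencing of a function call needs PHP 5.4+, hence the parse error). A sketch, with hypothetical attribute values:

        $album = $xmlDoc->getElementsByTagName('album')->item(0);   // first <album> node
        $newImg = $xmlDoc->createElement('img');
        $newImg->setAttribute('src', '007.jpg');                    // hypothetical values
        $newImg->setAttribute('caption', '7th caption');
        $album->appendChild($newImg);
        print $xmlDoc->saveXML();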


  • Connecting an overloaded PyQT signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID:

        def populate(self, num_bits, parent = None):
            """ Adds check boxes to the GroupBox according to the bitmask size """
            self.bitArray.resize(num_bits)
            layout = QHBoxLayout()
            for i in range(num_bits):
                cb = QCheckBox()
                cb.setText(QString.number(i))
                self.buttonGroup.addButton(cb, i)
                layout.addWidget(cb)
            self.setLayout(layout)

    Then, each time a user clicks a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as pythonic as possible, I wanted to use new-style signal syntax, so I tried this:

        self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit)

    The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is executing I get this error when I click a check box:

        The debugged program raised the exception unhandled TypeError
        "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'"

    which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is: how can we specify, using new-style syntax, that self.buttonGroup should emit the buttonClicked(int) signal instead of the default overload, buttonClicked(QAbstractButton*)?
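
    For the record, PyQt's new-style API selects among overloads by subscripting the bound signal with the argument type, so the int overload would be picked like this (a one-line sketch against the names in the excerpt):

        # select the buttonClicked(int) overload explicitly
        self.buttonGroup.buttonClicked[int].connect(self.bitArray.toggleBit)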


  • Facebook graph API post to user's wall

    - by Lance
    I'm using the FB graph API to post content to the user's wall. I originally tried using this method:

        $wall_post = array(
            array('message' => 'predicted the', 'name' => 'predicted the'),
            array('message' => $winning_team, 'name' => $winning_team,
                  'link' => 'http://www.sportannica.com/teams.php?team='.$winning_team.'&amp;year=2012'),
            array('message' => 'to beat the', 'name' => 'to beat the',),
            array('message' => $losing_team, 'name' => $losing_team,
                  'link' => 'http://www.sportannica.com/teams.php?team='.$losing_team.'&amp;year=2012'),
            array('message' => 'on '.$game_date.'', 'name' => 'on '.$game_date.''),
            array('picture' => 'http://www.sportannica.com/img/team_icons/current_season_logos/large/'.$winning_team.'.png')
        );
        $res = $facebook->api('/me/feed/', 'post', '$wall_post');

    But, much to my surprise, you can't post multiple links to a user's wall. So now I'm using the graph API to post content to a user's wall much like the way Spotify does. I've figured out that I need to create custom actions and objects with the Open Graph dashboard, so I've created the "predict" action and given it permission to edit the object "game". Now I have this code:

        $facebook = new Facebook(array(
            'appId'  => 'appID',
            'secret' => 'SECRET',
            'cookie' => true
        ));

        $access_token = $facebook->getAccessToken();
        $user = $facebook->getUser();

        if($user != 0) {
            curl -F 'access_token='$.access_token.'' \
                 -F 'away_team=New York Yankees' \
                 -F 'home_team=New York Mets' \
                 -F 'match=http://samples.ogp.me/413385652011237' \
                 'https://graph.facebook.com/me/predict-edit-add:predict'
        }

    I keep getting an error reading:

        Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING

    Any ideas?
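
    The parse error is consistent with the curl command being pasted verbatim inside PHP, where shell syntax isn't valid. A sketch of the same call made through the PHP SDK's api() method (the action path is copied from the excerpt, not verified against the app's dashboard):

        if ($user != 0) {
            // POST the Open Graph action via the SDK instead of shell curl
            $res = $facebook->api('/me/predict-edit-add:predict', 'POST', array(
                'away_team' => 'New York Yankees',
                'home_team' => 'New York Mets',
                'match'     => 'http://samples.ogp.me/413385652011237',
            ));
        }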


  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:

        - a small table: 198 rows, 16 kB on disk
        - a large table: 1.2M rows, 59 MB data + 89 MB index on disk
        - a large table: 2.2M rows, 198 MB + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
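
    If dropping the constraints around the hourly rebuild turns out to be the answer, a sketch of that pattern (table and constraint names hypothetical):

        BEGIN;
        ALTER TABLE t1 DROP CONSTRAINT t1_small_id_fkey;
        -- ... bulk load here (COPY / INSERT ... SELECT) ...
        ALTER TABLE t1 ADD CONSTRAINT t1_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        -- note: re-adding revalidates every row in one pass, which only pays
        -- off if that beats the per-row checks during the load
        COMMIT;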


  • C++ Problems with #import of .NET out-of-proc server.

    - by jm
    In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like:

        z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
        z:\server.tlh(111) : error C2501: 'TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74)  : error C2143: syntax error : missing ';' before 'tag::id'
        z:\server.tli(74)  : error C2433: 'TypePtr' : 'inline' not permitted on data declarations
        z:\server.tli(74)  : error C2501: '_TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74)  : fatal error C1004: unexpected end of file found

    The TLH looks like:

        ...
        _bstr_t GetToString ( );
        VARIANT_BOOL Equals ( const _variant_t & obj );
        long GetHashCode ( );
        _TypePtr GetType ( );
        long Open ( );
        ...

    I am not really interested in having the base .NET object methods like GetType(), Equals(), etc., but GetType() seems to be causing problems. Some Google research indicates I could #import MSCORLIB.TLB (or put it in the path), but I can't get that to compile either. Any tips?
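
    One avenue, offered as a sketch rather than a verified fix: #import supports rename attributes, so the colliding members can be renamed out of the way (the replacement identifiers below are arbitrary; the mscorlib ReportEvent rename is the classic workaround for its clash with winbase.h):

        // Pull in mscorlib first so _Type resolves, then rename GetType
        // out of the way in the server's own type library.
        #import <mscorlib.tlb> raw_interfaces_only rename("ReportEvent", "MscorlibReportEvent")
        #import "server.tlb" rename("GetType", "GetDotNetType")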


  • Noob boost::bind member function callback question

    - by shaz
        #include <boost/bind.hpp>
        #include <iostream>

        using namespace std;
        using boost::bind;

        class A
        {
        public:
            void print(string &s)
            {
                cout << s.c_str() << endl;
            }
        };

        typedef void (*callback)();

        class B
        {
        public:
            void set_callback(callback cb) { m_cb = cb; }
            void do_callback() { m_cb(); }
        private:
            callback m_cb;
        };

        void main()
        {
            A a;
            B b;
            string s("message");
            b.set_callback(bind(A::print, &a, s));
            b.do_callback();
        }

    So what I'm trying to do is to have the print method of A stream "message" to cout when b's callback is activated. I'm getting an "unexpected number of arguments" error from MSVC10. I'm sure this is super noob basic, and I'm sorry in advance.
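
    A sketch of the usual shape of the fix, assuming storing the bound object is acceptable: a raw function pointer can't hold what bind() returns, but boost::function can (note also the & needed on the member-function pointer):

        #include <boost/function.hpp>

        typedef boost::function<void()> callback;   // replaces the raw pointer typedef

        // ...
        b.set_callback(bind(&A::print, &a, s));     // &A::print, not A::print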


  • Sending JSON to a server

    - by SK9
    I'm running the following Java - an HttpURLConnection PUT request with JSON data that will be sent from an Android device. (I'll handle any raised exceptions once this is working.)

        Authenticator.setDefault(new Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(nameString, pwdString.toCharArray());
            }
        });

        url = new URL(myURLString);

        HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
        urlConnection.setDoOutput(true);
        urlConnection.setChunkedStreamingMode(0);
        urlConnection.setRequestMethod("PUT");
        urlConnection.setRequestProperty("Content-Type", "application/json");

        OutputStream output = null;
        try {
            output = urlConnection.getOutputStream();
            output.write(jsonArray.toString().getBytes());
        } finally {
            if (output != null) {
                output.close();
            }
        }

        int status = ((HttpURLConnection) urlConnection).getResponseCode();
        System.out.println("" + status);
        urlConnection.disconnect();

    I'm receiving an HTTP 500 error (internal error code) saying that an unexpected property is blocking the request. The JSONArray comprises JSONObjects whose keys I know are correct. The server is pretty standard and expects HTTP PUTs with JSON bodies. Am I missing something glaring? Thanking you kindly in advance.
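
    One diagnostic worth adding before the disconnect() - a sketch that doesn't change the request, it just surfaces the server's complaint: on a 4xx/5xx status the response body arrives on getErrorStream() rather than getInputStream(), and it usually names the offending property (java.io imports assumed):

        if (status >= 400) {
            // read the error body the server sent back with the 500
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(urlConnection.getErrorStream(), "UTF-8"));
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
            reader.close();
        }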


  • Service reference error when moving dev. environment from XP to W7

    - by Peter
    Hi, I am building an application and I am using web services to get data from a server. It was working fine when I was developing on my XP machine, but I had to switch to Windows 7. On the new machine I grabbed the latest version of the code using SourceSafe. However, when I try to add a service reference in the solution, or update an existing one, I get the following error:

        There was an error downloading 'http://localhost:52490/Service/CustomerService.asmx'.
        The request failed with the error message:

        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required to service
        this request. Please review the following specific parse error details and modify
        your source file appropriately.
        Parser Error Message: Could not create type 'Digital_Server.CustomerService'.

        Source Error:
        <%@ WebService Language="vb" CodeBehind="CustomerService.asmx.vb"
            Class="Digital_Server.CustomerService" %>

        Source File: /Service/CustomerService.asmx    Line: 1
        Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927

        Metadata contains a reference that cannot be resolved:
        'http://localhost:52490/Service/CustomerService.asmx'. An error occurred while
        receiving the HTTP response to http://localhost:52490/Service/CustomerService.asmx.
        This could be due to the service endpoint binding not using the HTTP protocol.
        This could also be due to an HTTP request context being aborted by the server
        (possibly due to the service shutting down). See server logs for more details.
        The underlying connection was closed: An unexpected error occurred on a receive.
        Unable to read data from the transport connection: An existing connection was
        forcibly closed by the remote host.

        If the service is defined in the current solution, try building the solution and
        adding the service reference again.

    Does it have anything to do with the IIS, or is it a configuration file I have to change in the solution? Any help is appreciated.


  • Downloading Spreadsheets From Google Docs

    - by jeremynealbrown
    Hello, I am working on an Android app that uses the gdata-java-client to download documents for display only. So far I have an application that authenticates with the services and displays a list of the user's documents. When the user selects a document, another query is made for the document itself. A request for txt, html, rtf and doc files works well; however, when I request a spreadsheet in either .csv or .xls format, the result is unexpected.

    I'm using an HttpResponse object to store the result of an HttpRequest. When I request a document in .csv or .xls format, the HttpResponse.parseAsString() method produces an entire HTML page, which appears to be the Google Docs home page. Sounds strange, but the result is the actual HTML for the login page. HttpResponse.getStatusMessage() returns a 200.

    Seems like I am missing something simple here. Is there another property of the HttpResponse that contains the .csv data? I am pretty sure that I am using the correct URI for downloading spreadsheets, because it works when I download it through my browser. In any case, here is an example URI:

        https://spreadsheets.google.com/feeds/download/spreadsheets/Export?key=0AsE_6_YIr797dHBTUWlHMUFXeTV4ZzJlUGxWRnJXanc&exportFormat=csv

    Thanks in advance for any help :)
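
    A guess at the cause, flagged as an assumption: landing on the login page with a 200 usually means the auth token being sent isn't valid for spreadsheets.google.com - the spreadsheets service ("wise") issues its own tokens, separate from the docs-list service. A sketch with the gdata-java-client (application name hypothetical):

        // Authenticate against the spreadsheets service ("wise") specifically;
        // a docs-list token is not honored by spreadsheets.google.com.
        GoogleService spreadsheets = new GoogleService("wise", "myCompany-myApp-1");
        spreadsheets.setUserCredentials(email, password);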


  • What is wrong with the JavaScript event handling in this example? (Using click() and hover() jQuery

    - by Bungle
    I'm working on a sort of proof-of-concept for a project that approximates Firebug's inspector tool. For more details, please see this related question. Here is the example page (I've only tested it in Firefox): http://troy.onespot.com/static/highlight.html

    The idea is that, when you're mousing over any element that can contain text, it should "highlight" with a light gray background to indicate the boundaries of that element. When you then click on the element, it should alert() a CSS selector that matches it.

    This is somewhat working in the example linked above. However, there's one fundamental problem. When mousing over from the top of the page to the bottom, it will pick up the paragraphs, the <h1> element, etc., but it doesn't get the <div>s that encompass those paragraphs. For example, if you "sneak up" on the <div> that contains the two paragraphs "The area was settled..." and "Austin was selected..." from the left - tracing down the left edge of the page and entering the <div> just between the two paragraphs - then it is picked up.

    I assume this has something to do with the fact that I haven't attached an event handler to the <body> element (which is what you're entering the <div> from when you come in from the left), but I have attached handlers to the <p>s (which is where you're entering from if you come from the top or bottom). There are also other issues - background colors that "stick" and the like - that I think are related.

    As indicated in the related question posted above, I suspect there is something about event bubbling that I don't understand that is causing unexpected behavior. Can anyone spot what's wrong with my code? Thanks in advance for any help!
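
    Without the page's source only a sketch is possible, but the usual bubbling-friendly pattern is one delegated handler instead of per-element hover bindings: because mouseover bubbles, a single handler on document sees every element, and event.target is always the innermost element under the cursor (the class name here is hypothetical):

        // One delegated handler instead of binding hover to each element:
        $(document).bind('mouseover', function (e) {
            $('.hl').removeClass('hl');   // un-stick the previous highlight
            $(e.target).addClass('hl');   // innermost element under the cursor
        });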


  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database?

    I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things.

    Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production.

    Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
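
    For reference, the DBMS_STATS package supports exactly this export/import flow and exists on 9i; a sketch (schema and staging-table names hypothetical, and someone with production access would need to run the first two steps):

        -- On production: stage the optimizer statistics into a regular table
        EXEC DBMS_STATS.CREATE_STAT_TABLE('APP_OWNER', 'PROD_STATS');
        EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('APP_OWNER', 'PROD_STATS');

        -- Ship PROD_STATS across with exp/imp, then on development:
        EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('APP_OWNER', 'PROD_STATS');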


  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google:

        pairs = zip(t[::2], t[1::2])

    I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests:

        import time
        from itertools import islice, izip

        def pairs_1(t):
            return zip(t[::2], t[1::2])

        def pairs_2(t):
            return izip(t[::2], t[1::2])

        def pairs_3(t):
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        A = range(10000)
        B = xrange(len(A))

        def pairs_4(t):
            # ignore value of t!
            t = B
            return izip(islice(t,None,None,2), islice(t,1,None,2))

        for f in pairs_1, pairs_2, pairs_3, pairs_4:
            # time the pairing
            s = time.time()
            for i in range(1000):
                p = f(A)
            t1 = time.time() - s
            # time using the pairs
            s = time.time()
            for i in range(1000):
                p = f(A)
                for a, b in p:
                    pass
            t2 = time.time() - s
            print t1, t2, t2-t1

    These were the results on my computer:

        1.48668909073     2.63187503815  1.14518594742
        0.105381965637    1.35109519958  1.24571323395
        0.00257992744446  1.46182489395  1.45924496651
        0.00251388549805  1.70076990128  1.69825601578

    If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected.

    Is there another, "better" way of traversing a list in pairs? Note that if the list has an odd number of elements, then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included?

    I added these two suggestions from the answers to the tests:

        def pairwise(t):
            it = iter(t)
            return izip(it, it)

        def chunkwise(t, size=2):
            it = iter(t)
            return izip(*[it]*size)

    These are the results:

        0.00159502029419  1.25745987892  1.25586485863
        0.00222492218018  1.23795199394  1.23572707176

    Results so far. Most pythonic and very efficient:

        pairs = izip(t[::2], t[1::2])

    Most efficient and very pythonic:

        pairs = izip(*[iter(t)]*2)

    It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence by adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest().
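
    For completeness, a sketch of that last point (Python 2.6+, where izip_longest lives in itertools): the shared-iterator idiom combined with izip_longest pads the final odd pair with None.

        from itertools import izip_longest

        def chunkwise_padded(t, size=2):
            it = iter(t)
            return izip_longest(*[it] * size)            # odd tail pairs with fillvalue=None

        print list(chunkwise_padded([1, 2, 3, 4, 5]))    # [(1, 2), (3, 4), (5, None)]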

