Search Results

Search found 3304 results on 133 pages for 'soul trace'.


  • trace server load? how to

    - by Clear.Cache
    My server (CentOS 4.8) keeps shooting up in load. It's a shared hosting server with cPanel/WHM. How can I trace the process/user causing this ongoing problem? I use top, but it shows nothing that stands out for a particular script from a particular user. I have suPHP enabled with PHP 5.3x and MySQL 5 as well.

    Read the article

  • Broken Motherboard Trace

    - by CoffeeBean
    When you're fairly certain that a motherboard trace is broken near the CPU (e.g. you suspect the heatsink/fan was improperly installed), is it advisable to attempt a repair? I've heard that it can be done using a substance for repairing embedded windshield defoggers. Has anyone had any experience with this?

    Read the article

  • Trace Server Load: Simple "How To"?

    - by Clear.Cache
    My server specs: CentOS 4.8 32-bit, cPanel/WHM, suPHP enabled on PHP 5.3x, MySQL 5.x. I need someone to please show me various ways to trace what is causing server load spikes. Sometimes I see so many "nobody" users running httpd processes, but I don't know how to determine which user(s) might be responsible, even though suPHP is enabled.

    Read the article

  • Trace() method doesn't work in FlashDevelop

    - by numerical25
    When I put a trace("test"); at the entry point of my FlashDevelop project and run it, the application runs fine but I do not see the trace in the output. Below is my code:

        package
        {
            import flash.display.Sprite;
            import flash.events.Event;

            /**
             * ...
             * @author Anthony Gordon
             */
            public class Main extends Sprite
            {
                public function Main():void
                {
                    if (stage) init();
                    else addEventListener(Event.ADDED_TO_STAGE, init);
                }

                private function init(e:Event = null):void
                {
                    trace("test");
                    removeEventListener(Event.ADDED_TO_STAGE, init);
                    // entry point
                    var game:Game = new Game(stage);
                    addChild(game);
                }
            }
        }

    Read the article

  • Stack trace with incorrect line number

    - by adrianbanks
    Why would a stack trace show "line 0", but only for one frame in the stack trace? E.g.:

        ...
        at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior)
        at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader()
        at My.LibraryA.Some.Method():line 16
        at My.LibraryB.Some.OtherMethod():line 0
        at My.LibraryB.Some.Method():line 22
        at My.LibraryA.Some.Method():line 10

    Background: I have an application that is failing with an exception and logging a stack trace to its log file. When the application was built, all assemblies were compiled with full debug info (Project Properties - Build - Advanced - Debug Info - Full), so PDB files were generated. To help me diagnose where the error is coming from, I dropped the PDB files into the application's bin directory and reproduced the exception. All line numbers for each stack frame look correct, with the exception of one which displays "line 0" as its source.

    Read the article

  • Can’t dup NilClass - how to trace to offender

    - by fullware
    This exception occurs often and intermittently in development mode, and appears to get triggered by model associations. There are lots of references found by Google, but none seem to help trace the problem to an offending class. Does anyone have any insight into how to trace the occurrence of this exception to its cause? I've seen the posts on adding "unloadable", but I'm not sure I buy it - unless there's a way to trace it somehow to its cause, I'm not in favor of indiscriminately adding such things to every class in hopes the problem might go away. Rails 2.3.5.

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation, when:

    - The target table uses a unique filtered index; and
    - No key column of the filtered index is updated; and
    - A column from the filtering condition is updated; and
    - Transient key violations are possible.

    Example Tables

    Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target. The target table contains three columns: an integer primary key, a single-character alternate key, and a status code column. A filtered unique index exists on the alternate key, but is only enforced where the status code is 'a':

        CREATE TABLE #Target
        (
            pk integer NOT NULL,
            ak character(1) NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

        CREATE UNIQUE INDEX uq1
        ON #Target (ak)
        INCLUDE (status_code)
        WHERE status_code = 'a';

    The changes table contains just an integer primary key (to identify the target row to change) and the new status code:

        CREATE TABLE #Changes
        (
            pk integer NOT NULL,
            status_code character(1) NOT NULL,
            PRIMARY KEY (pk)
        );

    Sample Data

    The sample data for the example is:

        INSERT #Target (pk, ak, status_code)
        VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');

        INSERT #Changes (pk, status_code)
        VALUES (1, 'd'), (4, 'a');

                 Target                     Changes
        +-----------------------+    +------------------+
        | pk | ak | status_code |    | pk | status_code |
        |----+----+-------------|    |----+-------------|
        |  1 | A  | a           |    |  1 | d           |
        |  2 | B  | a           |    |  4 | a           |
        |  3 | C  | a           |    +------------------+
        |  4 | A  | d           |
        +-----------------------+

    The target table's alternate key (ak) column is unique for rows where status_code = 'a'. Applying the changes to the target will change row 1 from status 'a' to status 'd', and row 4 from status 'd' to status 'a'. The result of applying all the changes will still satisfy the filtered unique index, because the 'A' in row 1 will be deleted from the index and the 'A' in row 4 will be added.

    Merge Test One

    Let's now execute a MERGE statement to apply the changes:

        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code
            THEN UPDATE SET status_code = c.status_code;

    The MERGE changes the two target rows as expected. The updated target table now contains:

        +-----------------------+
        | pk | ak | status_code |
        |----+----+-------------|
        |  1 | A  | d           | <- changed from 'a'
        |  2 | B  | a           |
        |  3 | C  | a           |
        |  4 | A  | a           | <- changed from 'd'
        +-----------------------+

    Merge Test Two

    Now let's repopulate the changes table to reverse the updates we just performed:

        TRUNCATE TABLE #Changes;

        INSERT #Changes (pk, status_code)
        VALUES (1, 'a'), (4, 'd');

    This will change row 1 back to status 'a' and row 4 back to status 'd'. As a reminder, the current state of the tables is:

                 Target                     Changes
        +-----------------------+    +------------------+
        | pk | ak | status_code |    | pk | status_code |
        |----+----+-------------|    |----+-------------|
        |  1 | A  | d           |    |  1 | a           |
        |  2 | B  | a           |    |  4 | d           |
        |  3 | C  | a           |    +------------------+
        |  4 | A  | a           |
        +-----------------------+

    We execute the same MERGE statement:

        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code
            THEN UPDATE SET status_code = c.status_code;

    However, this time we receive the following message:

        Msg 2601, Level 14, State 1, Line 1
        Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'.
        The duplicate key value is (A).
        The statement has been terminated.

    Applying the changes using UPDATE

    Let's now rewrite the MERGE to use UPDATE instead:

        UPDATE t
        SET status_code = c.status_code
        FROM #Target AS t
        JOIN #Changes AS c ON t.pk = c.pk
        WHERE c.status_code <> t.status_code;

    This query succeeds where the MERGE failed. The two rows are updated as expected:

        +-----------------------+
        | pk | ak | status_code |
        |----+----+-------------|
        |  1 | A  | a           | <- changed back to 'a'
        |  2 | B  | a           |
        |  3 | C  | a           |
        |  4 | A  | d           | <- changed back to 'd'
        +-----------------------+

    What went wrong with the MERGE?

    In this test, the MERGE query execution happens to apply the changes in the order of the pk column.

    In test one, this was not a problem: row 1 is removed from the unique filtered index (by changing status_code from 'a' to 'd') before row 4 is added. At no point does the table contain two rows where ak = 'A' and status_code = 'a'.

    In test two, however, the first change was to change row 1 from status 'd' to status 'a'. This change means there would be two rows in the filtered unique index where ak = 'A' (both row 1 and row 4 meet the index filtering criterion status_code = 'a').

    The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn't apply to MERGE in any case). This strict rule applies regardless of the fact that, if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from 'a' to 'd', removing it from the filtered unique index and resolving the key violation).

    Why it went wrong

    The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue. I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals, or in Craig Freedman's blog post on maintaining unique indexes). To summarize, though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to:

    - Split each row update into a delete followed by an insert
    - Filter out rows that would not change the index (due to the filter on the index, or a non-updating update)
    - Sort the resulting stream by index key, with deletes before inserts
    - Collapse delete/insert pairs on the same index key back into an update

    The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations). In this case, the net effect is a single update of the filtered unique index: changing the row for ak = 'A' from pk = 4 to pk = 1. In case that is less than 100% clear, let's look at the operation in test two again:

                 Target                     Changes                   Result
        +-----------------------+    +------------------+    +-----------------------+
        | pk | ak | status_code |    | pk | status_code |    | pk | ak | status_code |
        |----+----+-------------|    |----+-------------|    |----+----+-------------|
        |  1 | A  | d           |    |  1 | a           |    |  1 | A  | a           |
        |  2 | B  | a           |    |  4 | d           |    |  2 | B  | a           |
        |  3 | C  | a           |    +------------------+    |  3 | C  | a           |
        |  4 | A  | a           |                            |  4 | A  | d           |
        +-----------------------+                            +-----------------------+

    From the filtered index's point of view (filtered for status_code = 'a' and shown in nonclustered index key order), the overall effect of the query is:

          Before           After
        +---------+    +---------+
        | pk | ak |    | pk | ak |
        |----+----|    |----+----|
        |  4 | A  |    |  1 | A  |
        |  2 | B  |    |  2 | B  |
        |  3 | C  |    |  3 | C  |
        +---------+    +---------+

    The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = 'A'. This is the magic performed by the split, sort, and collapse. Notice in particular how the original changes to the index key (on the ak column) have been transformed into an update of a non-key column (pk is included in the nonclustered index). By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations.

    The Execution Plans

    (The original post includes screenshots of the estimated execution plans for both queries.) The MERGE execution plan, the one that produces the incorrect key-violation error, is a narrow (per-row) update: a single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index. The successful UPDATE plan is a wide (per-index) update: the clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained.

    There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations. One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur.

    Workarounds

    The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by:

    - Adding all columns referenced in the filtered index's WHERE clause to the index key (INCLUDE is not sufficient); or
    - Executing the query with trace flag 8790 set, e.g. OPTION (QUERYTRACEON 8790).

    Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible). Either change will produce a successfully-executing wide update plan for the MERGE that failed previously.

    Conclusion

    The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post. It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance.

    The MERGE plan may fail at execution time depending on the order in which rows are processed and the distribution of data in the database. Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point.

    Connect bug filed here.

    Tests performed on SQL Server 2012 SP1 CU1 (build 11.0.3321) x64 Developer Edition.

    © 2012 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi
    Email: [email protected]
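
    For reference, here is a minimal sketch of the two workarounds described above, written against the same temporary tables. The exact statements are assumptions based on the post's description (the post does not show them in full):

        -- Workaround 1: make the filtering column part of the index key
        -- (the post notes that INCLUDE is not sufficient)
        DROP INDEX uq1 ON #Target;
        CREATE UNIQUE INDEX uq1
        ON #Target (ak, status_code)
        WHERE status_code = 'a';

        -- Workaround 2: force the wide (per-index) update plan with
        -- undocumented trace flag 8790
        MERGE #Target AS t
        USING #Changes AS c ON c.pk = t.pk
        WHEN MATCHED AND c.status_code <> t.status_code
            THEN UPDATE SET status_code = c.status_code
        OPTION (QUERYTRACEON 8790);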

    Read the article

  • kernel panic with exitcode=0x00000004 and no call trace

    - by litmusconfig
    A bit of background first - I'm trying to configure a MicroBlaze Linux (big-endian version) system on a Xilinx ML506 eval board. The goal is to use the second partition of a CompactFlash card attached to the Xilinx SystemACE controller. So far, root in initramfs works, and after boot I can mount and use said partition, no problem. But if I try to use it right from the get-go with the "root=/dev/xsa2" kernel command line parameter, the system hangs with:

        [...]
        Freeing unused kernel memory: 143k freed
        Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004

    And that's it - no register dump, no call trace, nothing further from the serial console, even though the kernel has been configured with debugging enabled. Now, I'm pretty new at this, so is there something else I should be doing to see something more informative from the kernel?

    Read the article

  • ICMP - TTL - Trace Route

    - by dbasnett
    I asked this question at Stack Overflow and then thought this may be the better place to ask. Given the following situation:

        PC --- |aa RTR1 bb| --- |aa RTR2 bb| --- |aa RTR3 bb| etc.

    Each |aa RTR bb| is meant to be a router with two ports, aa and bb. My question is this: when you do a trace route from the PC, which router port address should respond with the "time to live exceeded in transit" message? I seem to remember being taught to think of the router as being in as many parts as it has ports, so that in my scenario, when aa forwards the packet to bb and decrements the TTL to 0, it will be the address of the aa port in the failure message. I am trying to find the definitive answer. Thanks.

    Read the article

  • Ping / Trace Route

    - by dbasnett
    More and more I see programmers developing web based applications that are based on what I call "ping-then-do" mentality. An example would be the application that pings the mail server before sending the mail. In a rather heated debate in another forum I made this statement, "If you are going to write programs that use the internet, you should at least have a basic idea of the fundamentals. The desire to "ping-then-do" tells me that many who are, don’t." On this forum and at Stack Overflow I see numerous questions about ping / trace route and wonder why? If it is acceptable to have a discussion about it here I would like to hear what others think. If not I assume it will be closed rapidly.

    Read the article

  • VPN trace route

    - by Jake
    I am inside an Active Directory (AD) domain and trying to trace route to another AD domain at a remote site, supposedly connected by VPN in between. The local domain can be accessed at 192.168.3.x and the remote location at 192.168.2.x. When I do a tracert, I am surprised to see that the results do not show the intermediate ISP nodes. If I use the public IP of the remote location, a normal tracert going through every intermediate node shows up.

        1    <1 ms    <1 ms    <1 ms  192.168.3.1
        2     1 ms    <1 ms    <1 ms  192.168.3.254
        3     7 ms     7 ms     7 ms  212.31.2xx.xx
        4   197 ms   201 ms   196 ms  62.6.1.2xx
        5   201 ms   201 ms   210 ms  vacc27.norwich.vpn-acc.bt.net [62.6.192.87]
        6   209 ms   209 ms   209 ms  81.146.xxx.xx
        7   209 ms   209 ms   209 ms  COMPANYDOMAIN [192.168.2.6]

    Can someone explain how this VPN tunnelling works? Does this mean the VPN is technically faster than going without it?

    Read the article

  • Trace redirect loop

    - by Michel Krämer
    I have a large PHP application. After I changed some settings I get a redirection loop (i.e. the browser is redirected to the same page over and over again). The problem is that I don't know which command (which line in which PHP file) in this application causes the redirect. Is there a way to trace calls to the header() function? Or - even better - is there a way to trace redirects in PHP? Thanks in advance, Michel
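
    One low-tech way to find the offender is to funnel redirects through a single function that logs the caller before sending the Location header. This is a sketch, not code from the question; redirect_with_trace() is a hypothetical helper your application would have to call instead of header() directly:

        <?php
        // Hypothetical helper: every redirect goes through here, so the file
        // and line that requested it end up in the PHP error log.
        function redirect_with_trace($url) {
            $frames = debug_backtrace();
            error_log(sprintf('Redirect to %s requested at %s:%d',
                $url, $frames[0]['file'], $frames[0]['line']));
            header('Location: ' . $url);
            exit;
        }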

    Read the article

  • Trace/BPT trap when running feedparser inside a Thread object

    - by simao
    Hello, I am trying to run a Thread to parse a list of links using the universal feed parser, but when I start the thread I get a Trace/BPT trap. Here's the code I am using:

        class parseRssFiles(Thread):
            def __init__(self, rssLinks):
                Thread.__init__(self)
                self.rssLinks = rssLinks

            def run(self):
                self.rssContents = [feedparser.parse(link) for link in rssLinks]

    Is there any other way to do this? Link to the report generated by Mac OS X 10.6.2: http://simaom.com/trace.txt Thanks

    Read the article

  • javascript AOP statement trace

    - by Paul
    Some JavaScript libraries, such as jQuery and Dojo, provide AOP APIs that can trace a function. Just wondering whether there are any JavaScript AOP libraries that can trace an individual statement?

    Read the article

  • trace an asp.net website in production

    - by uno
    Is there a way that I can trace every method, basically a line trace, in an ASP.NET web site in a production environment? I don't want to go about creating db logging for every line - I see an intermittent error and would like to see every line called and performed by the website, per user.

    Read the article

  • Stack Trace of cross-thread exceptions with Invoke

    - by the_lotus
    When an exception happens after calling Invoke, .NET shows the stack trace as if the error happened while calling Invoke. In the example below, .NET will say the error happened in UpdateStuff instead of UpdateStuff -> BadFunction. Is there a way to catch the "real" exception and show the correct stack trace?

        Private Sub UpdateStuff()
            If (Me.InvokeRequired) Then
                Me.Invoke(New UpdateStuffDelegate(AddressOf UpdateStuff))
                Return
            End If
            Badfunction()
        End Sub

        Private Sub BadFunction()
            Dim o As Object
            o.ToString()
        End Sub
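
    One common mitigation, shown here as a sketch rather than the poster's code, is to wrap the body that runs after the Invoke marshalling in a Try/Catch, so the original exception (with BadFunction in its stack trace) can be logged before it propagates:

        ' Minimal sketch; assumes Imports System.Diagnostics for Debug.WriteLine.
        Private Sub UpdateStuff()
            If Me.InvokeRequired Then
                Me.Invoke(New UpdateStuffDelegate(AddressOf UpdateStuff))
                Return
            End If
            Try
                BadFunction()
            Catch ex As Exception
                ' ex.ToString() still includes BadFunction in the stack trace here
                Debug.WriteLine(ex.ToString())
                Throw   ' rethrow without resetting the stack trace
            End Try
        End Sub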

    Read the article

  • PDB: exception when in console - full stack trace

    - by EoghanM
    When at the pdb console, entering a statement which causes an exception results in just a single-line stack trace, e.g.:

        (Pdb) someFunc()
        *** TypeError: __init__() takes exactly 2 arguments (1 given)

    However, I'd like to figure out where exactly in someFunc the error originates - i.e., in this case, which class the failing __init__ is attached to. Is there a way to get a full stack trace in pdb?

    Read the article

  • Call trace in Android

    - by DenMark
    I want to know how to do method tracing for Android applications. I mean a sequence of calls on each object, not a stack trace. It's very similar to this question (Call trace in java), but on a different platform (JVM on a PC vs. Dalvik on Android). I have no control over the start arguments of Dalvik, thus I cannot specify a Java agent (or am I wrong here?). Is there another way to do method tracing? Thanks!

    Read the article

  • trace an asp.net website in production - c#/asp.net

    - by uno
    Is there a way that I can trace every method, basically a line trace, in an ASP.NET web site in a production environment? I don't want to go about creating db logging for every line - I see an intermittent error and would like to see every line called and performed by the website, per user.

    Read the article

  • Get crashing application stack trace

    - by Tony
    Is there any program that anyone knows of (not a debugger) that will produce a stack trace of a crashing application? The application crash can be simulated at will on a server on which I cannot necessarily install a debugger. That's why I'm asking whether there's another way to get a stack trace, so I can then have a look.

    Read the article

  • trace() not working. Flash

    - by Nitesh Panchal
    Hello, I created a new ActionScript (3.0) file and wrote something as simple as trace("Hello World");, but it is not working. I have Flash Player 10 and I also made sure I have not checked "omit trace statements" in the publish settings. Where am I going wrong? Please help.

    Read the article

  • How to get stack trace information for logging in production when using the GAC

    - by Jonathan Parker
    I would like to get stack trace (file name and line number) information for logging exceptions etc. in a production environment. The DLLs are installed in the GAC. Is there any way to do this? This article says the following about putting PDB files in the GAC: "You can spot these easily because they will say you need to copy the debug symbols (.pdb file) to the GAC. In and of itself, that will not work." I know this article refers to debugging with VS, but I thought it might apply to logging the stack trace also. I've followed the instructions in the answer to this question, except for unchecking "Optimize code", which they said was optional. I copied the DLLs and PDBs into the GAC but I'm still not getting the stack trace information. Here's what I get in the log file for the stack trace:

        OnAuthenticate at offset 161 in file:line:column <filename unknown>:0:0
        ValidateUser at offset 427 in file:line:column <filename unknown>:0:0
        LogException at offset 218 in file:line:column <filename unknown>:0:0

    I'm using NLog. My NLog layout is:

        layout="${date:format=s}|${level}|${callsite}|${identity}|${message}|${stacktrace:format=Raw}"

    ${stacktrace:format=Raw} being the relevant part.

    Read the article

  • Trace PRISM / CAL events (best practice?)

    - by Christian
    Ok, this question is for people with either a deep knowledge of PRISM or some magic skills I just lack (yet). The background is simple: Prism allows the declaration of events to which the user can subscribe or publish. In code this looks like this:

        _eventAggregator.GetEvent<LayoutChangedEvent>().Subscribe(UpdateUi, true);
        _eventAggregator.GetEvent<LayoutChangedEvent>().Publish("Some argument");

    Now this is nice, especially because these events are strongly typed, and the declaration is a piece of cake:

        public class LayoutChangedEvent : CompositePresentationEvent<string> { }

    But now comes the hard part: I want to trace events in some way. I had the idea to subscribe using a lambda expression calling a simple log message. That worked perfectly in WPF, but in Silverlight there is some method access error (it took me some time to figure out the reason). If you want to see for yourself, try this in Silverlight:

        eA.GetEvent<VideoStartedEvent>().Subscribe(obj => TraceEvent(obj, "vSe", log));

    If this were possible, I would be happy, because I could easily trace all events using a single line to subscribe. But it is not... The alternative approach is writing a different function for each event, and assigning this function to the events. Why different functions? Well, I need to know WHICH event was published. If I use the same function for two different events, I only get the payload as an argument. I have no way to figure out which event caused the tracing message. I tried:

    - using Reflection to get the causing event (not working)
    - using a constructor in the event to enable each event to trace itself (not allowed)

    Any other ideas? Chris

    PS: Writing this text took me most likely longer than writing 20 functions for my 20 events, but I refuse to give up :-) I just had the idea to use PostSharp; that would most likely work (although I am not sure, perhaps I'd end up having only information about the base class). Tricky and such an unimportant topic...
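
    One possible direction, sketched here under assumptions rather than as a tested answer (it uses the CompositePresentationEvent/IEventAggregator API mentioned in the question, and TraceEvent is a hypothetical helper), is to capture the event type once in a generic subscription helper so a single logging handler can report which event fired:

        // using Microsoft.Practices.Composite.Events;   (namespace varies by Prism/CAL version)
        // Hypothetical helper: one generic method subscribes a tracing handler to any
        // event type; the closure carries the event's name along with each payload.
        void TraceEvent<TEvent, TPayload>(IEventAggregator ea)
            where TEvent : CompositePresentationEvent<TPayload>, new()
        {
            string eventName = typeof(TEvent).Name;   // e.g. "LayoutChangedEvent"
            ea.GetEvent<TEvent>().Subscribe(
                payload => Debug.WriteLine(eventName + ": " + payload),
                ThreadOption.PublisherThread,
                true);  // keep a strong reference so the lambda is invoked directly,
                        // which may avoid the Silverlight method-access error
        }

        // Usage:
        // TraceEvent<LayoutChangedEvent, string>(_eventAggregator);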

    Read the article

  • Missing line number in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have this web service implemented with C# using VS 2008. I publish it on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory on inetpub. Also, the web.config file has debug=true. Then I call a web service that throws an exception. The stack trace does not contain the line numbers. I have no idea what I am missing here, any ideas? Additional info: if I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the VS built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that there is something related to IIS that ignores the PDB files! Update: when I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the specific directory related to my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen if I run the project using the VS built-in web server. So if I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? BTW, when I attach to the worker process, I can see that it says "Symbols loaded"!

    Read the article

  • Android - Calling getJSONArray throwing JSONException with no stack trace

    - by Agathron
    Hi all, I'm currently working on an Android app that pulls a list of forums from a JSON feed. I'm trying to parse the feed, and immediately upon calling getJSONArray a JSONException is thrown with no stack trace. The JSON being returned is stored in a JSONObject jobj with the format as follows:

        {
            "Forum": [
                {"ForumName":"CEC Employee Communications Forum","ForumId":"105"},
                {"ForumName":"CEC External Stakeholder Relations Forum","ForumId":"109"},
                {"ForumName":"See All...","ForumId":"0"}
            ]
        }

    However, when running the following code, I get an immediate exception without a stack trace:

        JSONArray jarray = new JSONArray();
        jarray = jobj.getJSONArray("Forum");

    Running jobj.getJSONArray("Forum").toString(); returns what looks to be a correct array of the format:

        [
            {"ForumName":"CEC Employee Communications Forum","ForumId":"105"},
            {"ForumName":"CEC External Stakeholder Relations Forum","ForumId":"109"},
            {"ForumName":"See All...","ForumId":"0"}
        ]

    I also tried JSONArray jarray = new JSONArray(jobj.getJSONArray("Forum").toString()); and had the exception thrown immediately. Any help would be greatly appreciated. Thanks!
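
    For what it's worth, a minimal sketch of reading that feed with org.json while surfacing the exception details (an assumption about intent, not code from the question; field names are taken from the JSON shown above):

        // Assumes jobj already holds the parsed feed shown above.
        try {
            JSONArray forums = jobj.getJSONArray("Forum");
            for (int i = 0; i < forums.length(); i++) {
                JSONObject forum = forums.getJSONObject(i);   // one forum entry
                String name = forum.getString("ForumName");
                String id = forum.getString("ForumId");
                Log.d("Feed", name + " (" + id + ")");
            }
        } catch (JSONException e) {
            Log.e("Feed", "Could not read Forum array", e);   // logs the real cause
        }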

    Read the article
