Search Results


  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
    TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver and so required you to learn a new API. However, if you are building Java EE 6 applications, you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. The EclipseLink wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) shows how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2.

    Before we dig into the code, here are the key concepts:

      • A POJO is mapped to a NoSQL data source using the @NoSQL annotation or the <no-sql> element in "persistence.xml".
      • A subset of JPQL and Criteria queries is supported, based upon the underlying data store.
      • Connection properties are defined in "persistence.xml".

    Now, let's take a look at the code.

    1. Download the latest EclipseLink 2.4 Nightly Bundle. There are Installer, Source, and Bundle downloads; make sure to download the Bundle link (20120410) and unzip it.

    2. Download the GlassFish 3.1.2 zip and unzip it.

    3. Install the EclipseLink 2.4 JARs in GlassFish. Remove the following JARs from "glassfish/modules":

           org.eclipse.persistence.antlr.jar
           org.eclipse.persistence.asm.jar
           org.eclipse.persistence.core.jar
           org.eclipse.persistence.jpa.jar
           org.eclipse.persistence.jpa.modelgen.jar
           org.eclipse.persistence.moxy.jar
           org.eclipse.persistence.oracle.jar

       Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules":

           org.eclipse.persistence.antlr_3.2.0.v201107111232.jar
           org.eclipse.persistence.asm_3.3.1.v201107111215.jar
           org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar
           org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar
           org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar

    4. Start MongoDB. Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as:

           sudo mkdir -p /data/db
           sudo chown `id -u` /data/db

       Refer to the Quickstart for more details. Start MongoDB as:

           arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> ->./bin/mongod
           ./bin/mongod --help for help and startup options
           Mon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.local
           Mon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5
           Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebf
           Mon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
           Mon Apr  9 12:56:02 [initandlisten] options: {}
           Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journal
           Mon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery needed
           Mon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017
           Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017

    5. Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here.

    6. Create a Java EE 6 Maven web app as:

           mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes \
               -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql \
               -DarchetypeVersion=1.5 -DinteractiveMode=false

    7. Copy the model files from the checked-out workspace to the generated project as:

           cd javaee-nosql
           cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java

    8. Copy "persistence.xml":

           mkdir src/main/resources
           cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources

    9. Add the following dependencies:

           <dependency>
               <groupId>org.eclipse.persistence</groupId>
               <artifactId>org.eclipse.persistence.jpa</artifactId>
               <version>2.4.0-SNAPSHOT</version>
               <scope>provided</scope>
           </dependency>
           <dependency>
               <groupId>org.eclipse.persistence</groupId>
               <artifactId>org.eclipse.persistence.nosql</artifactId>
               <version>2.4.0-SNAPSHOT</version>
           </dependency>
           <dependency>
               <groupId>org.mongodb</groupId>
               <artifactId>mongo-java-driver</artifactId>
               <version>2.7.3</version>
           </dependency>

       The first one is for the latest EclipseLink APIs, the second one is for the EclipseLink NoSQL support, and the last one is the MongoDB Java driver.

    10. And add the following repository:

           <repositories>
               <repository>
                   <id>EclipseLink Repo</id>
                   <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
                   <snapshots>
                       <enabled>true</enabled>
                   </snapshots>
               </repository>
           </repositories>

    11. Copy "Test.java" to the generated project:

           mkdir src/main/java/example
           cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/

       This file contains the source code to CRUD the JPA entity to MongoDB. The sample is explained in detail on the EclipseLink wiki.

    12. Create a new servlet in the "example" directory as:

           package example;

           import java.io.IOException;
           import java.io.PrintWriter;
           import javax.servlet.ServletException;
           import javax.servlet.annotation.WebServlet;
           import javax.servlet.http.HttpServlet;
           import javax.servlet.http.HttpServletRequest;
           import javax.servlet.http.HttpServletResponse;

           /**
            * @author Arun Gupta
            */
           @WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})
           public class TestServlet extends HttpServlet {

               protected void processRequest(HttpServletRequest request, HttpServletResponse response)
                       throws ServletException, IOException {
                   response.setContentType("text/html;charset=UTF-8");
                   PrintWriter out = response.getWriter();
                   try {
                       out.println("<html>");
                       out.println("<head>");
                       out.println("<title>Servlet TestServlet</title>");
                       out.println("</head>");
                       out.println("<body>");
                       out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>");
                       try {
                           Test.main(null);
                       } catch (Exception ex) {
                           ex.printStackTrace();
                       }
                       out.println("</body>");
                       out.println("</html>");
                   } finally {
                       out.close();
                   }
               }

               @Override
               protected void doGet(HttpServletRequest request, HttpServletResponse response)
                       throws ServletException, IOException {
                   processRequest(request, response);
               }

               @Override
               protected void doPost(HttpServletRequest request, HttpServletResponse response)
                       throws ServletException, IOException {
                   processRequest(request, response);
               }
           }

    13. Build the project and deploy it as:

           mvn clean package
           glassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war

    Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in server.log:

           connecting(EISLogin(
               platform=> MongoPlatform
               user name=> ""
               MongoConnectionSpec()))
           . . .
           Connected: User:  Database: 2.7  Version: 2.7
           . . .
           Executing MappedInteraction()
               spec => null
               properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT}
               input => [DatabaseRecord(
                   CUSTOMER._id => 4F848E2BDA0670307E2A8FA4
                   CUSTOMER.NAME => AMCE)]
           . . .
           Data access result: [{TOTALCOST=757.0,
               ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0},
                   {DESCRIPTION=balls, LINENUMBER=2, COST=5.0},
                   {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0},
                   {DESCRIPTION=net, LINENUMBER=4, COST=2.0},
                   {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0},
                   {DESCRIPTION=handling, LINENUMBER=6, COST=55.0},
                   {DESCRIPTION=tax, LINENUMBER=7, COST=300.0}],
               SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=17 Jane St.}],
               VERSION=2, _id=4F848E2BDA0670307E2A8FA8, DESCRIPTION=Pingpong table,
               CUSTOMER__id=4F848E2BDA0670307E2A8FA7,
               BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}]

    You'll not see any output in the browser, just the output in the console, but the code can easily be modified to do so. Once again, the complete Maven project can be downloaded here.

    Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU?
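
    To make the entity-to-document mapping concrete, here is a minimal sketch of what a mapped entity might look like, inferred from the CUSTOMER collection and NAME field visible in the log output above. The annotations come from EclipseLink's org.eclipse.persistence.nosql.annotations package (the annotation class is spelled @NoSql in the 2.4 code base); the field names here are illustrative, and the real entities live in the sample's model package.

           // A minimal sketch of a JPA entity mapped to a MongoDB document with
           // EclipseLink 2.4. Field names are illustrative; see the sample's model
           // package for the actual entities.
           package model;

           import javax.persistence.Entity;
           import javax.persistence.GeneratedValue;
           import javax.persistence.Id;
           import org.eclipse.persistence.nosql.annotations.DataFormatType;
           import org.eclipse.persistence.nosql.annotations.Field;
           import org.eclipse.persistence.nosql.annotations.NoSql;

           @Entity
           @NoSql(dataFormat = DataFormatType.MAPPED)    // map the entity to a MAPPED document
           public class Customer {

               @Id
               @GeneratedValue                           // MongoDB generates the OID
               @Field(name = "_id")
               private String id;

               @Field(name = "NAME")                     // shows up as CUSTOMER.NAME in the log above
               private String name;

               public String getId() { return id; }
               public String getName() { return name; }
               public void setName(String name) { this.name = name; }
           }

    With a matching persistence unit in "persistence.xml", persisting such an entity with a plain em.persist() is what produces the mongo.operation=INSERT interaction shown in the log.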


  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) the benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system, not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), then the least loaded core on that node, and so on. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though: this description turns out to be incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the first paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and the T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects, such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10 as well, but certain bugs suppressed the mixed packing/spreading behavior.

    There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have the kernel spread your process's threads out more.

    To recap, the "new" policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold, the kernel picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication; the truth is in the code.

    Remarks:

      • It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes, then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact, and which have precedence under what circumstances, could be the topic of a future blog entry.

      • The scheduler is work-conserving.

      • The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links, and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present), and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or "package" in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, as the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

      • Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy.

      • Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

      • If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy will result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
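
    If you want to reproduce the placement experiment described above, a trivial harness like the following is enough. This is a sketch, not the code I used; the thread count is arbitrary. Launch it on an otherwise idle machine and observe placement with the usual Solaris tools, e.g., prstat -mL or mpstat.

           // PlacementTest -- a minimal sketch of the experiment described above: launch
           // N compute-bound threads (20 by default) and keep the process alive so thread
           // placement can be observed with tools such as "prstat -mL" or "mpstat".
           public class PlacementTest {
               public static void main(String[] args) throws InterruptedException {
                   final int n = (args.length > 0) ? Integer.parseInt(args[0]) : 20;
                   for (int i = 0; i < n; i++) {
                       Thread t = new Thread(new Runnable() {
                           public void run() {
                               long x = 0;
                               while (true) {   // spin: keep the thread compute-bound
                                   x++;
                               }
                           }
                       });
                       t.start();
                   }
                   Thread.sleep(Long.MAX_VALUE);   // park main; kill the process when done
               }
           }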


  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
    WebSocket provides a full-duplex, bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 platform. This Tip Of The Day (TOTD) provides an introduction to WebSocket and shows how the JSR is evolving to support the programming model.

    First, a little primer on WebSocket! WebSocket is a combination of the IETF RFC 6455 protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with the remote host. Unlike HTTP, there is no need to create a new TCP connection and send a full set of headers for every message exchanged between client and server.

    The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens using an HTTP Upgrade, the client and server can send messages to each other, each independently of the other. There are no pre-defined message exchange patterns of request/response or one-way between client and server; these need to be explicitly defined over the basic protocol. The communication between client and server is pretty symmetric, but there are two differences:

      • A client initiates a connection to a server that is listening for a WebSocket request.
      • A client connects to one server using a URI. A server may listen to requests from multiple clients on the same URI.

    Other than these two differences, the client and server behave symmetrically after the opening handshake. In that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can be text or binary data. Control frames carry data intended for protocol-level signaling.

    Now let's talk about the JSR! The Java API for WebSocket is being worked on as JSR 356 in the Java Community Process and will define a standard API for building WebSocket applications. The JSR will provide support for:

      • Creating WebSocket Java components to handle bi-directional WebSocket conversations
      • Initiating and intercepting WebSocket events
      • Creation and consumption of WebSocket text and binary messages
      • The ability to define WebSocket protocols and content models for an application
      • Configuration and management of WebSocket sessions, like timeouts, retries, cookies, and connection pooling
      • Specification of how WebSocket applications will work within the Java EE security model

    Tyrus is the Reference Implementation for JSR 356 and is already integrated in the GlassFish 4.0 promoted builds.

    And finally, some code! The API allows you to create WebSocket endpoints using annotations or an interface. This TOTD shows a simple sample using annotations; a subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage:

           @WebSocketEndpoint(path = "/hello")
           public class HelloBean {

               @WebSocketMessage
               public String sayHello(String name) {
                   return "Hello " + name + "!";
               }
           }

    @WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message. The first method parameter is injected with the payload of the incoming message; in this case it is assumed that the payload is text-based. It can also be of type byte[] in case the payload is binary. A custom object may be used if the decoders attribute is specified on @WebSocketEndpoint; this attribute provides a list of classes that define how a custom object can be decoded. The method can also take an optional Session parameter, which is injected by the runtime and captures the conversation between the two endpoints. The return type of the method can be String, byte[], or a custom object; for a custom object, the encoders attribute on @WebSocketEndpoint needs to define how it can be encoded.

    The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

           <div style="text-align: center;">
               <form action="">
                   <input onclick="say_hello()" value="Say Hello" type="button">
                   <input id="nameField" name="name" value="WebSocket" type="text"><br>
               </form>
           </div>
           <div id="output"></div>

    The code is relatively straightforward. It has an HTML form with a button that invokes the say_hello() method, and a text field named nameField. A div placeholder is available for displaying the output. Now let's take a look at some JavaScript code:

           <script language="javascript" type="text/javascript">
               var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
               var websocket = new WebSocket(wsUri);
               websocket.onopen = function(evt) { onOpen(evt) };
               websocket.onmessage = function(evt) { onMessage(evt) };
               websocket.onerror = function(evt) { onError(evt) };

               function init() {
                   output = document.getElementById("output");
               }

               function say_hello() {
                   websocket.send(nameField.value);
                   writeToScreen("SENT: " + nameField.value);
               }

    This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onOpen), closed (onClose), an error is received (onError), or a message from the endpoint is received (onMessage). The client API has several send methods that transmit data over the connection. This particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over a WebSocket connection and receives a response based upon the implementation of the sayHello method shown above.

    How to test this out?

      1. Download the entire source project here, or just the WAR file.
      2. Download GlassFish 4.0 build 57 or later and unzip.
      3. Start GlassFish as "asadmin start-domain".
      4. Deploy the WAR file as "asadmin deploy HelloWebSocket.war".
      5. Access the application at http://localhost:8080/HelloWebSocket/index.jsp. After clicking on the "Say Hello" button, the response is displayed in the "output" div.

    Here are some references for you:

      • WebSocket - Protocol and JavaScript API
      • JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

    Subsequent blogs will discuss the following topics (not necessarily in that order):

      • Binary data as payload
      • Custom payloads using encoder/decoder
      • Error handling
      • Interface-driven WebSocket endpoints
      • Java client API
      • Client and server configuration
      • Security
      • Subprotocols
      • Extensions
      • Other topics from the API
      • Capturing WebSocket on-the-wire messages
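
    Since binary payloads are described above only in prose (and are listed as an upcoming topic), here is a minimal preview sketch of the byte[] variant, reusing the same early-draft annotations as the HelloBean sample. The annotation package shown (javax.net.websocket.annotations) and the path are assumptions on my part, and the draft API may well change before the final release:

           // A binary echo endpoint sketch: the incoming payload arrives as byte[] and
           // the returned byte[] is sent back to the client. Annotation and package names
           // follow the early draft and may change; the path is illustrative.
           import javax.net.websocket.annotations.WebSocketEndpoint;
           import javax.net.websocket.annotations.WebSocketMessage;

           @WebSocketEndpoint(path = "/echo-binary")
           public class BinaryEchoBean {

               @WebSocketMessage
               public byte[] echo(byte[] payload) {
                   return payload;   // echo the binary message back unchanged
               }
           }

    On the JavaScript side, such an endpoint would be exercised by setting websocket.binaryType and passing an ArrayBuffer or Blob to websocket.send().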


  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

    Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, simply because most of the time you never see an error like the one above. I'll try to explain a few key concepts about variable locking, and hopefully you never will see that error.

    First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead, in that you need to be careful to avoid lock conflicts in some scenarios.

    The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task's pre-execute phase the variables are locked, you then use them during the execute phase when your code is run, and they are unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write through the Dts.Variables collection that is exposed in the task for the purpose.

    With the variable PackageInt specified in the ReadWriteVariables property, when I write the code inside the task I don't have to worry about locking at all, as shown below.

           public void Main()
           {
               // Set the variable value to something new
               Dts.Variables["PackageInt"].Value = 199;

               // Raise an event so we can play in the event handler
               bool fireAgain = true;
               Dts.Events.FireInformation(0, "Script Task Code",
                   "This is the script task raising an event.", null, 0, ref fireAgain);

               Dts.TaskResult = (int)ScriptResults.Success;
           }

    As well as accessing the variable, hassle free, I also raise an event. Now consider a scenario where I have an OnInformation event handler as well, containing another Script Task. What if that event handler tries to use the same variable too? Well, for the purposes of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events:

      1. Package execution starts.
      2. The Script Task in the Control Flow starts.
      3. The Script Task in the Control Flow locks the PackageInt variable, as specified in its ReadWriteVariables property.
      4. The Script Task in the Control Flow executes its script, and the OnInformation event is raised.
      5. The OnInformation event handler starts.
      6. The Script Task in the OnInformation event handler starts.
      7. The Script Task in the OnInformation event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but fails because the variable is already locked.

    The problem is caused by the event handler task trying to use a variable that is already locked by the task in the Control Flow. Events are always raised synchronously, therefore the task in the Control Flow that raised the event will not regain control until the event handler has completed, so we really do have an unresolvable locking conflict, better known as a deadlock.

    In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties:

           public void Main()
           {
               // Set the variable value to something new, with explicit lock control
               Variables lockedVariables = null;
               Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
               lockedVariables["PackageInt"].Value = 199;
               lockedVariables.Unlock();

               // Raise an event so we can play in the event handler
               bool fireAgain = true;
               Dts.Events.FireInformation(0, "Script Task Code",
                   "This is the script task raising an event.", null, 0, ref fireAgain);

               Dts.TaskResult = (int)ScriptResults.Success;
           }

    Now the package executes successfully, because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL engine background this should all sound strangely familiar: it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that on SQL pages or SSIS variables.

    Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way, the task manages the locking for you, and it will fail when it cannot lock the variables it requires.

    The scenario outlined above is a clear-cut deadlock scenario: both parties are waiting on each other, so it is unresolvable. The mechanism used within SSIS isn't actually that clever, though, and whilst the message says it is a deadlock, it really just means it tried a few times and then gave up. The last part of the error message is actually the most accurate description of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

    Now, this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder that in high-concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration.

    Update: make sure you don't try to use explicit locking while also leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error; you cannot lock a variable twice!


  • Visiting the Fire Station in Coromandel

    Hm, I just tried to remember how we actually came up with this cool idea... but it's already too blurred, and it doesn't really matter after all. Anyway, if I remember correctly, it happened during one of the Linux meetups at Mugg & Bean, Bagatelle, where Ajay and I brought our children along and had a brief conversation about how cool it would be to check out one of the fire stations here in Mauritius. We both thought that it would be a great experience and adventure for the little ones.

    An idea takes shape

    And there we go, down the usual routine these days... having an idea, checking out the options, and discussing who's doing what. Except this time it was all up to Ajay, and he did a fantastic job. At the end of August he told me that he had gotten in touch with one of his friends who works as a fire fighter at the station in Coromandel, and that there could be an option to come and visit them (soon). A couple of days later: confirmed! Be there, and in time... What time? Anyway, doesn't really matter... Everything was settled and arranged.

    I asked the kids on Friday afternoon whether they might be interested in seeing the fire engines and what a fire fighter does. Of course, they were all in! Getting up early on Sunday morning isn't really a regular exercise for any of us, but everything went smoothly, and after a short breakfast it was time to leave. Where are we going? Are we there yet? Now we are in Bambous. Why do you go this way? The kids were so into it; absolutely amazing to see their excitement. Are we there yet? Well, we went through the sugar cane fields towards Chebel and then down into the industrial zone at Coromandel. Honestly, I had only a rough idea of where the fire station is located, but with Google Maps within reach, getting lost shouldn't be a problem. And my worries were washed away when our children guided us: "There! Over there are the fire engines! We have to turn left, dad." No comment; the kids were right!

    As we were there a little too early, we parked the car and the kids started to explore the area and outskirts of the fire station. Some minutes later, as if we had placed an order, a unit of two cars had to go out on an alarm, and the kids could witness them leaving as closely as possible. Sirens on, and wow!!!

    [Photo: Ladder truck L32 - MAN truck with Rosenbauer built-up and equipment by Metz]

    Taking the tour

    Ajay arrived shortly after that and finally guided us inside the station to meet his pal. The three guys were absolutely well prepared and showed us around the hall, explaining that two units were out at the moment. But the ladder truck (with a maximum expandable height of 32 m) was still around, and we all got great insight into the technology and equipment on the vehicle. It was amazing to see all three kids listening as Mambo gave some figures about the truck and explained how the fire fighters actually use it.

    [Photo: The children and "our" fire fighters of the day had great fun with the various fire engines]

    Absolutely fantastic that the children were allowed to experience this - we had so much fun! Ajay's son brought two of his toy fire engines along and shared them with ours, and they all played very well together. As a parent it was really amazing to see them at such ease.

    Enough theory

    Shortly afterwards the ladder truck was moved outside, got stabilised, and was made ready for some 'real-life' exercising. With the additional equipment of safety helmets, security belts, and so on, we all got a first-hand impression of what it could be like to be a fire fighter. Actually, I was totally amazed by the curiosity and excitement of my BWE. She was really into it and asked lots of interesting questions - general but also technical ones. And while our fighters were busy with Ajay and family, I gave her some more details and explanations about the truck, the expandable ladder, the safety cage at the top, and the other equipment available.

    [Photo: Safety first! No exceptions, and always be prepared for the worst case... The equipment had also been checked prior to the exercise - this is your life saver...]

    [Photo: Hooked up and ready to go... of course not too high. This is just a demonstration - and 32 meters above ground isn't for everyone.]

    Well, after that it was me who got the asking looks, and I finally revealed to the local fire fighters that I had been in an auxiliary fire brigade, more precisely in the hazard department, for more than 10 years. So not a professional fire fighter, but at least a passionate and educated one like them.

    Inside the station

    Our fire fighters really took their time to explain their daily job to the kids, gave them access to the operator's seat on the ladder truck, and showed how the truck cabin is equipped with the different radios and so on. It was really a great time. Later on we had a brief tour through the building itself, and again all of our questions were answered. We had great fun and started to joke about bits and pieces. For me it was also very interesting to compare the fire station here in Mauritius with the ones I have been to back in Germany.

    [Photo: Amazing to see them completely captivated in the play - the children had lots of fun!]

    We also learned that there are currently ten fire stations all over the island, plus two additional but private ones at the airport and at the harbour. The newest one is actually down in Black River on the west coast, because the response time from Quatre Bornes took too long to have any chance of answering an alarm effectively at all. IMHO a very good decision, as time is the most important factor in getting fire incidents under control.

    After all, it was a great experience for all of us, especially for the children, to see and understand that their toy trucks are only copies of the real thing, and that the job of a (professional) fire fighter is very important in our society. Don't forget that those guys run into the danger zone while you're trying to get away from it as fast as possible.

    [Photo: Another unit just came back from a grass fire - and shortly after, they went out again. No time to rest, too much to do!]

    [Photo: Mauritian fire fighters now and (maybe) in the future...]

    Thank you! It was an honour to be around!

    Thank you to Ajay for organising and arranging this Sunday morning event, and of course a Big Thank You to the three guys who took some time off to have us at the Fire Station in Coromandel and guide us through their daily job!

    And remember to call 115 in case of emergencies!


  • NUMA-aware placement of communication variables

    - by Dave
    For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity, and compulsory misses, and whether we can satisfy the miss from locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can then be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N; that is, the memory underlying a page resides on just one node.

    But when thinking about accesses to heavily written communication variables, we normally consider which caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses, cache probe activity, and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so they are likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought.

    Recently, though, I was exploring a simple shared-memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox and, having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that this was not the case, and that NUMA placement of the communication variables was actually quite important.

    For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.

    Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip time and throughput rate between the client and server. In this benchmark the server simply acts as a transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of the communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or on different cache lines -- while varying the node where the variables reside along with the placement of the threads.

    The key observation was that if the client and server threads were on different nodes, the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model, that means using split request and response communication variables, with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies.

    Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their responses not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts, ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer then doesn't need the home to respond over a QPI link -- the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path.

    Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their responses to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and is not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd-party owner of the line, if any, can respond either to the home or to the original requester (or even to both), according to the protocol policies. There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping.

    While collecting data I also noticed that there are placement concerns even in the seemingly trivial case where both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring (actually there are multiple contra-rotating rings), and the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly of the magnitude we see inter-package. I've not seen such communication-distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
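
    The communication pattern at the heart of the benchmark is easy to sketch. The following is a simplified Java analogue of the harness, an assumption on my part rather than the actual code: volatile fields act as the request and response mailboxes and both sides busy-wait. The real harness additionally pins threads and controls where the mailbox words are homed, which plain Java does not expose.

           // PingPong -- a simplified sketch of the client/server mailbox pattern
           // described above. Volatile longs act as the request and response mailboxes;
           // both sides busy-wait, and the server acts as a simple transponder that
           // writes request+1 into the response variable.
           public class PingPong {
               static volatile long request;    // written by the client, read by the server
               static volatile long response;   // written by the server, read by the client

               public static void main(String[] args) throws InterruptedException {
                   final int iterations = 1000000;
                   Thread server = new Thread(new Runnable() {
                       public void run() {
                           long last = 0;
                           for (int i = 0; i < iterations; i++) {
                               long req;
                               while ((req = request) == last) { }   // poll the request mailbox
                               last = req;
                               response = req + 1;                   // reply with request value plus 1
                           }
                       }
                   });
                   server.start();
                   long start = System.nanoTime();
                   for (long i = 1; i <= iterations; i++) {
                       request = i;                      // post a request
                       while (response != i + 1) { }     // busy-wait on the response variable
                   }
                   server.join();
                   long avg = (System.nanoTime() - start) / iterations;
                   System.out.println("average round trip: " + avg + " ns");
               }
           }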


  • Do logins by the gdm (or lightdm) user in auth.log mean my system is breached?

    - by Pramanshu
    Please look at this auth.log (from Ubuntu 14.04) I have provided, and tell me who this gdm user is and why there are all these unauthenticated logins. I am freaked out; please help! Here's the /var/log/auth.log file: http://paste.ubuntu.com/8120231/

    Update: I know now that "gdm" is the GNOME Display Manager and that it's there because of root. But please look at the log (there is more) and tell me if my system is breached.


  • Dynamic multiple instances of SWFUpload (Firefox vs IE)

    - by jean27
    We have a dynamic uploader which creates a new instance of SWFUpload. I'm a little bit confused by the outputs produced by Firefox and IE. Firefox has the same output as Chrome, Safari, and Opera. Whenever I click the button for adding a new instance, the previous instances of SWFUpload refresh in Firefox, while in IE they don't. I have this debug information (the repeated settings dumps are summarized below, showing only the fields that differ):

    For Firefox:

           ---SWFUpload Instance Info---
           Version: 2.2.0 2009-03-25
           Movie Name: SWFUpload_0
           Settings:
             upload_url: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ==
             flash_url: /content/swfupload.swf?preventswfcaching=1272512466022
             use_query_string: false
             requeue_on_error: false
             http_success:
             assume_success_timeout: 0
             file_post_name: Filedata
             post_params: [object Object]
             file_types: .jpg;.gif;.png;.bmp
             file_types_description: Image Files
             file_size_limit: 1MB
             file_upload_limit: 1
             file_queue_limit: 1
             debug: true
             prevent_swf_caching: true
             button_placeholder_id: file-1_swf
             button_placeholder: Not Set
             button_image_url: /content/images/blankButton.png
             button_width: 109
             button_height: 22
             button_text:
             button_text_style: color: #000000; font-size: 16pt;
             button_text_top_padding: 1
             button_text_left_padding: 30
             button_action: -110
             button_disabled: false
             custom_settings: [object Object]
           Event Handlers:
             swfupload_loaded_handler assigned: true
             file_dialog_start_handler assigned: true
             file_queued_handler assigned: true
             file_queue_error_handler assigned: true
             upload_start_handler assigned: true
             upload_progress_handler assigned: true
             upload_error_handler assigned: true
             upload_success_handler assigned: true
             upload_complete_handler assigned: true
             debug_handler assigned: true

           SWF DEBUG: SWFUpload Init Complete
           SWF DEBUG:
           SWF DEBUG: ----- SWF DEBUG OUTPUT ----
           SWF DEBUG: Build Number: SWFUPLOAD 2.2.0
           SWF DEBUG: movieName: SWFUpload_0
           SWF DEBUG: Upload URL: /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ==
           SWF DEBUG: File Types String: .jpg;.gif;.png;.bmp
           SWF DEBUG: Parsed File Types: jpg,gif,png,bmp
           SWF DEBUG: HTTP Success: 0
           SWF DEBUG: File Types Description: Image Files (.jpg;.gif;.png;.bmp)
           SWF DEBUG: File Size Limit: 1048576 bytes
           SWF DEBUG: File Upload Limit: 1
           SWF DEBUG: File Queue Limit: 1
           SWF DEBUG: Post Params:
           SWF DEBUG: ----- END SWF DEBUG OUTPUT ----
           SWF DEBUG:
           SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmp
           SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
           SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0
           SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1

           ---SWFUpload Instance Info---
           [identical to the SWFUpload_0 block above, except:]
             Movie Name: SWFUpload_1
             flash_url: /content/swfupload.swf?preventswfcaching=1272512476357
             button_placeholder_id: file-2_swf

           SWF DEBUG: SWFUpload Init Complete
           SWF DEBUG:
           SWF DEBUG: ----- SWF DEBUG OUTPUT ---- [same fields as above, movieName: SWFUpload_1] ----- END SWF DEBUG OUTPUT ----
           SWF DEBUG:
           SWF DEBUG: SWFUpload Init Complete
           SWF DEBUG:
           SWF DEBUG: ----- SWF DEBUG OUTPUT ---- [same fields as above, movieName: SWFUpload_0 again -- the first instance re-initializes] ----- END SWF DEBUG OUTPUT ----
           SWF DEBUG:
           SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmp
           SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
           SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0
           SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1
           SWF DEBUG: StartUpload: First file in queue
           SWF DEBUG: StartUpload(): No files found in the queue.
           SWF DEBUG: StartUpload: First file in queue
           SWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0
           SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0
           SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0
           SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: 30218. Total: 30218
           SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_1_0 Response Received: true Data: 65-AddClassification.png
           SWF DEBUG: Event: uploadComplete : Upload cycle complete.

    For IE:

           ---SWFUpload Instance Info---
           Version: 2.2.0 2009-03-25
           Movie Name: SWFUpload_0
           Settings:
             [identical to the Firefox SWFUpload_0 block above, except:]
             flash_url: /content/swfupload.swf?preventswfcaching=1272512200531
             button_text: Browse...
           Event Handlers:
             [all ten handlers assigned: true, as above]

           SWF DEBUG: SWFUpload Init Complete
           SWF DEBUG:
           SWF DEBUG: ----- SWF DEBUG OUTPUT ---- [same fields as above, movieName: SWFUpload_0] ----- END SWF DEBUG OUTPUT ----
           SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)
           SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmp
           SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
           SWF DEBUG: Event: fileQueued : File ID: SWFUpload_0_0
           SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1

           ---SWFUpload Instance Info---
           [identical to the block above, except:]
             Movie Name: SWFUpload_1
             flash_url: /content/swfupload.swf?preventswfcaching=1272512222093
             button_placeholder_id: file-2_swf

           SWF DEBUG: SWFUpload Init Complete
           SWF DEBUG:
           SWF DEBUG: ----- SWF DEBUG OUTPUT ---- [same fields as above, movieName: SWFUpload_1] ----- END SWF DEBUG OUTPUT ----
           SWF DEBUG:
           SWF DEBUG: Removing Flash functions hooks (this should only run in IE and should prevent memory leaks)
           SWF DEBUG: ExternalInterface reinitialized
           SWF DEBUG: Event: fileDialogStart : Browsing files. Multi Select. Allowed file types: .jpg;.gif;.png;.bmp
           SWF DEBUG: Select Handler: Received the files selected from the dialog. Processing the file list...
           SWF DEBUG: Event: fileQueued : File ID: SWFUpload_1_0
           SWF DEBUG: Event: fileDialogComplete : Finished processing selected files. Files selected: 1. Files Queued: 1
           SWF DEBUG: StartUpload: First file in queue
           SWF DEBUG: Event: uploadStart : File ID: SWFUpload_0_0
           SWF DEBUG: StartUpload: First file in queue
           SWF DEBUG: Event: uploadStart : File ID: SWFUpload_1_0
           SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_0_0
           SWF DEBUG: ReturnUploadStart(): File accepted by startUpload event and readied for upload. Starting upload to /86707/listing/asynchronousuploadphoto/87085/15/E1ptdReNMwcU/cUkx4p689ChPRZYMKkLZQ== for File ID: SWFUpload_1_0
           SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_0_0
           SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_0_0. Bytes: 29151. Total: 29151
           SWF DEBUG: Event: uploadProgress (OPEN): File ID: SWFUpload_1_0
           SWF DEBUG: Event: uploadProgress: File ID: SWFUpload_1_0. Bytes: Total: 30218
           SWF DEBUG: Event: uploadSuccess: File ID: SWFUpload_0_0 Response Received: true Data: 62-Greenwich_-_Branches.png
           SWF DEBUG: Event: uploadComplete : Upload cycle complete.

    Read the article

  • C# acting weird when reading in values from a file to an array

    - by Whitey
    This is the structure of my file:

        1111111111111111111111111
        2222222222222222222222222
        3333333333333333333333333
        4444444444444444444444444
        5555555555555555555555555
        6666666666666666666666666
        7777777777777777777777777
        8888888888888888888888888
        9999999999999999999999999
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000
        0000000000000000000000000

    And this is the code I'm using to read it into an array:

        using (StreamReader reader = new StreamReader(mapPath))
        {
            string line;
            for (int i = 0; i < iMapHeight; i++)
            {
                if ((line = reader.ReadLine()) != null)
                {
                    for (int j = 0; j < iMapWidth; j++)
                    {
                        iMap[i, j] = line[j];
                    }
                }
            }
        }

    I have done some debugging, and line[j] correctly iterates through each character in the currently read line. The problem lies with iMap[i, j]. After this block of code executes, iMap is int[14, 25], and every element in a row holds the same value, so the full debugger dump condenses to:

        Row(s)   Value at [row, 0..24]
        0        49
        1        50
        2        51
        3        52
        4        53
        5        54
        6        55
        7        56
        8        57
        9-13     48

    I have no idea where it's getting these values from, does anyone have an explanation? Thanks :)
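
    A likely explanation, with a sketch of the fix reusing the question's own names: line[j] is a char, and assigning a char to an int element stores its character code, so '1' becomes 49, '2' becomes 50, and '0' becomes 48, exactly the values in the dump. Subtracting '0' converts a digit character to its numeric value:

        // '0' == 48, '1' == 49, ..., '9' == 57, so normalize against '0':
        iMap[i, j] = line[j] - '0';   // e.g. '7' - '0' == 7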

    Read the article

  • Fedora error log file

    - by user111196
    I am running a Java application using the wrapper service yajsw. The problem is that it just stopped, without any error in its log file. So I was wondering: is there a system log file which will indicate the cause of it going down? Partial contents of the log file: Apr 6 00:12:20 localhost kernel: imklog 3.22.1, log source = /proc/kmsg started. Apr 6 00:12:20 localhost rsyslogd: [origin software="rsyslogd" swVersion="3.22.1" x-pid="2234" x-info="http://www.rsyslog.com"] (re)start Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpuset Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpu Apr 6 00:12:20 localhost kernel: Linux version 2.6.27.41-170.2.117.fc10.x86_64 ([email protected]) (gcc version 4.3.2 20081105 (Red Hat 4.3.2-7) (GCC) ) #1 SMP Thu Dec 10 10:36:29 EST 2009 Apr 6 00:12:20 localhost kernel: Command line: ro root=UUID=722ebf87-437f-4634-9c68-a82d157fa948 rhgb quiet Apr 6 00:12:20 localhost kernel: KERNEL supported cpus: Apr 6 00:12:20 localhost kernel: Intel GenuineIntel Apr 6 00:12:20 localhost kernel: AMD AuthenticAMD Apr 6 00:12:20 localhost kernel: Centaur CentaurHauls Apr 6 00:12:20 localhost kernel: BIOS-provided physical RAM map: Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000000000000 - 00000000000a0000 (usable) Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000000100000 - 00000000cfb50000 (usable) Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb50000 - 00000000cfb66000 (reserved) Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb66000 - 00000000cfb85c00 (ACPI data) Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000cfb85c00 - 00000000d0000000 (reserved) Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved) Apr 6 00:12:20 localhost kernel: BIOS-e820: 00000000fe000000 - 0000000100000000 (reserved) Apr 6 00:12:20 localhost kernel: BIOS-e820: 0000000100000000 - 0000000330000000 (usable) Apr 6 00:12:20 localhost kernel: DMI 2.5 present.
Apr 6 00:12:20 localhost kernel: last_pfn = 0x330000 max_arch_pfn = 0x3ffffffff Apr 6 00:12:20 localhost kernel: x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106 Apr 6 00:12:20 localhost kernel: last_pfn = 0xcfb50 max_arch_pfn = 0x3ffffffff Apr 6 00:12:20 localhost kernel: init_memory_mapping Apr 6 00:12:20 localhost kernel: last_map_addr: cfb50000 end: cfb50000 Apr 6 00:12:20 localhost kernel: init_memory_mapping Apr 6 00:12:20 localhost kernel: last_map_addr: 330000000 end: 330000000 Apr 6 00:12:20 localhost kernel: RAMDISK: 37bfc000 - 37fef6c8 Apr 6 00:12:20 localhost kernel: ACPI: RSDP 000F21B0, 0024 (r2 DELL ) Apr 6 00:12:20 localhost kernel: ACPI: XSDT 000F224C, 0084 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: FACP CFB83524, 00F4 (r3 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: DSDT CFB66000, 4974 (r1 DELL PE_SC3 1 INTL 20050624) Apr 6 00:12:20 localhost kernel: ACPI: FACS CFB85C00, 0040 Apr 6 00:12:20 localhost kernel: ACPI: APIC CFB83078, 00B6 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: SPCR CFB83130, 0050 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: HPET CFB83184, 0038 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: MCFG CFB831C0, 003C (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: WD__ CFB83200, 0134 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: SLIC CFB83338, 0176 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: ERST CFB6AAF4, 0210 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: HEST CFB6AD04, 027C (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: BERT CFB6A974, 0030 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: EINJ CFB6A9A4, 0150 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: ACPI: TCPA CFB834BC, 0064 (r1 DELL PE_SC3 1 DELL 1) Apr 6 00:12:20 localhost kernel: No NUMA configuration found Apr 6 00:12:20 localhost kernel: Faking a node at 0000000000000000-0000000330000000 Apr 6 00:12:20 localhost kernel: Bootmem setup node 0 0000000000000000-0000000330000000 Apr 6 00:12:20 localhost kernel: NODE_DATA [0000000000015000 - 0000000000029fff] Apr 6 00:12:20 localhost kernel: bootmap [000000000002a000 - 000000000008ffff] pages 66 Apr 6 00:12:20 localhost kernel: (7 early reservations) ==> bootmem [0000000000 - 0330000000] Apr 6 00:12:20 localhost kernel: #0 [0000000000 - 0000001000] BIOS data page ==> [0000000000 - 0000001000] Apr 6 00:12:20 localhost kernel: #1 [0000006000 - 0000008000] TRAMPOLINE ==> [0000006000 - 0000008000] Apr 6 00:12:20 localhost kernel: #2 [0000200000 - 0000a310cc] TEXT DATA BSS ==> [0000200000 - 0000a310cc] Apr 6 00:12:20 localhost kernel: #3 [0037bfc000 - 0037fef6c8] RAMDISK ==> [0037bfc000 - 0037fef6c8] Apr 6 00:12:20 localhost kernel: #4 [000009f000 - 0000100000] BIOS reserved ==> [000009f000 - 0000100000] Apr 6 00:12:20 localhost kernel: #5 [0000008000 - 000000c000] PGTABLE ==> [0000008000 - 000000c000] Apr 6 00:12:20 localhost kernel: #6 [000000c000 - 0000015000] PGTABLE ==> [000000c000 - 0000015000] Apr 6 00:12:20 localhost kernel: found SMP MP-table at [ffff8800000fe710] 000fe710 Apr 6 00:12:20 localhost kernel: Zone PFN ranges: Apr 6 00:12:20 localhost kernel: DMA 0x00000000 -> 0x00001000 Apr 6 00:12:20 localhost kernel: DMA32 0x00001000 -> 0x00100000 Apr 6 00:12:20 localhost kernel: Normal 0x00100000 -> 0x00330000 Apr 6 00:12:20 localhost kernel: Movable zone start PFN for each node Apr 6 00:12:20 localhost kernel: 
early_node_map[3] active PFN ranges Apr 6 00:12:20 localhost kernel: 0: 0x00000000 -> 0x000000a0 Apr 6 00:12:20 localhost kernel: 0: 0x00000100 -> 0x000cfb50 Apr 6 00:12:20 localhost kernel: 0: 0x00100000 -> 0x00330000 Apr 6 00:12:20 localhost kernel: ACPI: PM-Timer IO Port: 0x808 Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x02] lapic_id[0x04] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x06] lapic_id[0x05] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x07] lapic_id[0x03] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled) Apr 6 00:12:20 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1]) Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0]) Apr 6 00:12:20 localhost kernel: IOAPIC[0]: apic_id 8, version 0, address 0xfec00000, GSI 0-23 Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x09] address[0xfec81000] gsi_base[64]) Apr 6 00:12:20 localhost kernel: IOAPIC[1]: apic_id 9, version 0, address 0xfec81000, GSI 64-87 Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x0a] address[0xfec84000] gsi_base[160]) Apr 6 00:12:20 localhost kernel: IOAPIC[2]: apic_id 10, version 0, address 0xfec84000, GSI 160-183 Apr 6 00:12:20 localhost kernel: ACPI: IOAPIC (id[0x0b] address[0xfec84800] gsi_base[224]) Apr 6 00:12:20 localhost kernel: IOAPIC[3]: apic_id 11, version 0, address 0xfec84800, GSI 224-247 Apr 6 00:12:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 6 00:12:20 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 6 00:12:20 localhost kernel: Setting APIC routing to flat Apr 6 00:12:20 localhost kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 6 00:12:20 localhost kernel: Using ACPI (MADT) for SMP configuration information Apr 6 00:12:20 localhost kernel: SMP: Allowing 8 CPUs, 0 hotplug CPUs Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000000a0000 - 0000000000100000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb50000 - 00000000cfb66000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb66000 - 00000000cfb85000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb85000 - 00000000cfb86000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000cfb86000 - 00000000d0000000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000d0000000 - 00000000e0000000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000e0000000 - 00000000f0000000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000f0000000 - 00000000fe000000 Apr 6 00:12:20 localhost kernel: PM: Registered nosave memory: 00000000fe000000 - 0000000100000000 Apr 6 00:12:20 localhost kernel: Allocating PCI resources starting at d1000000 (gap: d0000000:10000000) Apr 6 00:12:20 localhost kernel: PERCPU: Allocating 65184 bytes of per cpu data Apr 6 00:12:20 localhost kernel: Built 1 zonelists in Zone order, mobility grouping on. 
Total pages: 3096524 Apr 6 00:12:20 localhost kernel: Policy zone: Normal Apr 6 00:12:20 localhost kernel: Kernel command line: ro root=UUID=722ebf87-437f-4634-9c68-a82d157fa948 rhgb quiet Apr 6 00:12:20 localhost kernel: Initializing CPU#0 Apr 6 00:12:20 localhost kernel: PID hash table entries: 4096 (order: 12, 32768 bytes) Apr 6 00:12:20 localhost kernel: Extended CMOS year: 2000 Apr 6 00:12:20 localhost kernel: TSC: PIT calibration confirmed by PMTIMER. Apr 6 00:12:20 localhost kernel: TSC: using PMTIMER calibration value Apr 6 00:12:20 localhost kernel: Detected 1994.992 MHz processor. Apr 6 00:12:20 localhost kernel: Console: colour VGA+ 80x25 Apr 6 00:12:20 localhost kernel: console [tty0] enabled Apr 6 00:12:20 localhost kernel: Checking aperture... Apr 6 00:12:20 localhost kernel: No AGP bridge found Apr 6 00:12:20 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 6 00:12:20 localhost kernel: Placing software IO TLB between 0x20000000 - 0x24000000 Apr 6 00:12:20 localhost kernel: Memory: 12324244k/13369344k available (3311k kernel code, 253484k reserved, 1844k data, 1296k init) Apr 6 00:12:20 localhost kernel: SLUB: Genslabs=13, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 Apr 6 00:12:20 localhost kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 3989.98 BogoMIPS (lpj=1994992) Apr 6 00:12:20 localhost kernel: Security Framework initialized Apr 6 00:12:20 localhost kernel: SELinux: Initializing. Apr 6 00:12:20 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes) Apr 6 00:12:20 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes) Apr 6 00:12:20 localhost kernel: Mount-cache hash table entries: 256 Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys ns Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys cpuacct Apr 6 00:12:20 localhost kernel: Initializing cgroup subsys devices Apr 6 00:12:20 localhost kernel: CPU: L1 I cache: 32K, L1 D cache: 32K Apr 6 00:12:20 localhost kernel: CPU: L2 cache: 4096K Apr 6 00:12:20 localhost kernel: CPU 0/0 -> Node 0 Apr 6 00:12:20 localhost kernel: CPU: Physical Processor ID: 0 Apr 6 00:12:20 localhost kernel: CPU: Processor Core ID: 0 Apr 6 00:12:20 localhost kernel: CPU0: Thermal monitoring enabled (TM1) Apr 6 00:12:20 localhost kernel: using mwait in idle threads. Apr 6 00:12:20 localhost kernel: ACPI: Core revision 20080609 Apr 6 00:12:20 localhost kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Apr 6 00:12:20 localhost kernel: CPU0: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz stepping 07 Apr 6 00:12:20 localhost kernel: Using local APIC timer interrupts. Apr 6 00:12:20 localhost kernel: Detected 20.781 MHz APIC timer. Apr 6 00:12:20 localhost kernel: Booting processor 1/4 ip 6000 Apr 6 00:12:20 localhost kernel: Initializing CPU#1 Apr 6 00:12:20 localhost kernel: Calibrating delay using timer specific routine.. 
3990.05 BogoMIPS (lpj=1995026) Apr 6 00:12:20 localhost kernel: CPU: L1 I cache: 32K, L1 D cache: 32K Apr 6 00:12:20 localhost kernel: CPU: L2 cache: 4096K Apr 6 00:12:20 localhost kernel: CPU 1/4 -> Node 0 Apr 6 00:12:20 localhost kernel: CPU: Physical Processor ID: 1 Apr 6 00:12:20 localhost kernel: CPU: Processor Core ID: 0 Apr 6 00:12:20 localhost kernel: CPU1: Thermal monitoring enabled (TM2) Apr 6 00:12:20 localhost kernel: x86 PAT enabled: cpu 1, old 0x7040600070406, new 0x7010600070106 Apr 6 00:12:20 localhost kernel: CPU1: Intel(R) Xeon(R) CPU E5335 @ 2.00GHz stepping 07 Apr 6 00:12:20 localhost kernel: checking TSC synchronization [CPU#0 -> CPU#1]: passed. Apr 6 00:12:20 localhost kernel: Booting processor 2/2 ip 6000 Apr 6 00:12:20 localhost kernel: Initializing CPU#2 Apr 6 00:12:20 localhost kernel: Calibrating delay using timer specific routine.. 3990.05 BogoMIPS (lpj=1995029)
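
    One place worth checking (an assumption on my part; nothing in the excerpt shows the actual cause): if the JVM was killed by the kernel's out-of-memory killer, the evidence lands in the same rsyslog-managed file shown above, /var/log/messages on Fedora. For example:

        # Look for OOM-killer activity around the time the app died
        grep -iE "out of memory|killed process" /var/log/messages
        # Recent kernel messages (kept only until reboot)
        dmesg | tail -50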

    Read the article

  • CPU & Memory Usage Log & Performance

    - by wittythotha
    I want to have an idea of the amount of CPU and memory being used. I have a website hosted using IIS, with clients connecting to it, and I want to find out the load on the CPU, RAM, and network when multiple clients connect. I tried tools like Fiddler, the inbuilt Resource Manager, and some other applications I found on the internet. I just want to keep track of all this data in a file, so I can plot a graph and find out how the CPU, etc. is performing. I read a few other posts, but didn't find anything that solves the problem. Is there a good CPU/memory logging tool available, just to plot a graph of the usage, etc.? EDIT: I want to know of some tool that can save the performance details in a log file, so that I can use it to plot a graph, etc.
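
    One built-in option worth trying on Windows: typeperf logs any performance counters straight to a CSV file, which plots directly in Excel. A sketch (counter names, interval, and sample count are illustrative):

        rem CPU and available memory every 10 seconds, 360 samples = 1 hour
        typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 10 -sc 360 -o perf_log.csv

    Perfmon's Data Collector Sets expose the same counters with scheduling, if a GUI is preferred.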

    Read the article

  • How to log trigger-initiated CRUD queries on PostgreSQL 8.4

    - by user47650
    In the PostgreSQL config file there is an option, log_statement = 'mod', which causes CRUD statements to be logged. However, this does not include CRUD statements issued from trigger functions, which means that following the log file is not enough to determine what changes are being made to the data when a third-party application makes many of its changes through triggers. Is there some other option I can use to include trigger CRUD? Alternatively, can I inspect pg_xlog in real time using some tool? (xlogdump and xlogviewer do not work with version 8.4; I have tried.)
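
    One avenue worth trying (hedged: it comes from contrib, not core logging): PostgreSQL 8.4 is the first release to ship the auto_explain module, and its log_nested_statements option covers statements executed inside functions, which includes trigger bodies. A sketch of the postgresql.conf entries:

        shared_preload_libraries = 'auto_explain'
        custom_variable_classes = 'auto_explain'   # may be required on 8.4 for the settings below
        auto_explain.log_min_duration = 0          # log every statement, not just slow ones
        auto_explain.log_nested_statements = on    # include statements run inside functions/triggers

    This logs query text with plans rather than a clean CRUD stream, but it does expose what the triggers execute.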

    Read the article

  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file - but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears. Swatch version 3.2.3, Perl 5.12, FTP: VSFTP, OS (test): OS X 10.6.8, OS (production): Solaris. From man I see I can pass contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats.

        $ man swatch
        (snip)
        exec command
            Execute command. The command may contain variables which are substituted with
            fields from the matched line. A $N will be replaced by the Nth field in the
            line. A $0 or $* will be replaced by the entire line.

    Swatch file .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launch with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results - it's echoing to TTY. This is not needed or desired on the server, but it does tell me things are working.

        ftplog *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results - bad! I appear to not be sending the variables to text.

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
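
    Two things look wrong, judging from the output (a hedged reading): the nested double quotes inside the pipe command end the string early, and the man page excerpt above documents $0/$N substitution for exec, not pipe, so it was the shell, not swatch, that expanded $0 into the script path. A sketch using exec instead, with the question's own paths:

        watchfor /OK LOGIN/
            echo=red
            exec "echo $0 >> /Users/bdunbar/dev/ftplog/output.txt"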

    Read the article

  • Logfile deleted on Oracle database: how do I re-create it?

    - by Daniel
    For my database assignment we were looking into 'database corruption', and I was asked to delete the second redo log file, which I did with the command:

        rm log02a.rdo

    This was in the $HOME/ORADATA/u03 directory. Then I started up my database using

        startup pfile=$PFILE nomount

    and mounted it with

        alter database mount;

    Now when I try to open it with

        alter database open;

    it gives me the error:

        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. log01a.rdo is still there, but not the one I deleted. How can I go about recovering this so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do

        select group#, member from v$logfile;

    I get:

        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again, I get "already part of the database". If I drop group 2 and then add it again with

        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    nothing happens: it supposedly alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?
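
    A hedged sketch of the usual recovery path: while the database is mounted, Oracle can rebuild the members of a redo log group in place with CLEAR LOGFILE, provided the group is not the current one needed for crash recovery. The statements below assume group 2 is the damaged group, as described:

        -- while mounted, not open:
        ALTER DATABASE CLEAR LOGFILE GROUP 2;
        -- if the group had not been archived yet, the unarchived form is required:
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
        ALTER DATABASE OPEN;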

    Read the article

  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone, I'm looking for an app or line of code that could let me observe a process, save the info in a number of variables, and then put the gathered info in a file. I've been trying variations of top, but with no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that works over a specified amount of time, writing lines with the info to a text file, so at the end I can have a sort of table with the data. The thing is, I'm going to stress test the server and I would like to have the data to make some statistics. Any comments and suggestions are most welcome.
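
    Since sysstat is standard on CentOS, sar and pidstat can both write interval samples that redirect cleanly to a text file; a sketch (interval, count, and PID values are illustrative):

        # system-wide CPU (-u) and memory (-r): every 10 seconds, 360 samples = 1 hour
        sar -u -r 10 360 > perf.log
        # per-process CPU and memory for one PID
        pidstat -u -r -p 1234 10 360 > process_perf.log

    Both outputs are plain columns, so they import directly into a spreadsheet for graphing.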

    Read the article

  • Measuring custom statistics with sar

    - by Will Glass
    I have a server application which I think is leaking file handles. I want to track the usage of file descriptors over time on my Linux (Ubuntu) server. I've figured out that I can track the number of file descriptors in use by a process with:

        lsof -p `pgrep the-process-name` | wc -l

    Since I'm already using sysstat and sar to track various metrics, I thought it'd be nice to display this with sar. I want to measure this every 10 minutes. Is it possible to add a custom metric to sar? Then I can easily report it out. If not, I'll write a simple cron job to collect this data and store it separately in a log file.
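
    As far as I can tell, sar has no hook for user-defined metrics; its collector only gathers the kernel counters sysstat knows about, so the cron-job fallback is the usual route. A sketch (the log path is illustrative, and % must be escaped inside crontab entries):

        # every 10 minutes, append "epoch-seconds descriptor-count" to the log
        */10 * * * * echo "$(date +\%s) $(lsof -p $(pgrep the-process-name) | wc -l)" >> /var/log/fd_count.log

    On Linux, ls /proc/$(pgrep the-process-name)/fd | wc -l gives the same count without the cost of a full lsof run.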

    Read the article

  • Parsing an XML File and Replacing a Chosen Node With Values From a Text File

    - by transmogrify
    I wrote a C# WinForms program to take an XML file and load values from a text file into each occurrence of the field which the user specifies in the UI. For whatever reason, the program inserts a carriage return into any nodes which don't contain a value. For example, it will do that to <example></example>, whereas it will not misbehave on something like <country>USA</country>. What is causing it to do this, and how can I prevent it? Here is the code from the part which handles this functionality:

        XmlDocument LoadXmlDoc = new XmlDocument();
        StreamReader sr = File.OpenText(DataLoadTxtBx.Text);
        string InputFromTxtFile;
        LoadXmlDoc.Load(XmlPath.Text);
        XmlNodeList NodeToCreateOrReplace = LoadXmlDoc.GetElementsByTagName(XmlTagNameTxtBx.Text);
        foreach (XmlNode SelectedNode in NodeToCreateOrReplace)
        {
            if ((InputFromTxtFile = sr.ReadLine()) != null)
            {
                SelectedNode.InnerText = InputFromTxtFile;
            }
        }
        sr.Close();
        LoadXmlDoc.Save(XmlPath.Text);
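
    One thing worth trying (an assumption; I have not reproduced the exact symptom): XmlDocument discards insignificant whitespace on Load by default and re-indents on Save, which can change how empty elements such as <example></example> are written back. Turning on PreserveWhitespace before Load makes Save keep the file's original layout:

        XmlDocument LoadXmlDoc = new XmlDocument();
        LoadXmlDoc.PreserveWhitespace = true;   // keep the file's own whitespace when saving
        LoadXmlDoc.Load(XmlPath.Text);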

    Read the article

  • How to exclude a properties file from a jar file?

    - by Nisarg Mehta
    Hi all, I have a Java application laid out as below:

        myProject
        |----src
        |    |--main
        |         |--resources
        |              |--userConfig.properties
        |              |--log4j.properties
        |---target

    I am using Maven to build my project, with this command to build the jar file:

        mvn package -DMaven.test.skip=true

    I want to exclude the userConfig.properties file from my jar file, so I have this in my pom.xml:

        <excludes>
            <exclude>**/userConfig.properties</exclude>
        </excludes>

    But that excludes it from the target folder where the compiled code resides, and the application will not run because it cannot find userConfig.properties. Can anyone help me? Thanks, Nisarg Mehta
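
    A sketch of the usual fix (assuming the goal is to keep userConfig.properties in target/classes for running while leaving it out of the jar): exclude the file in the maven-jar-plugin configuration instead of the resources section, since the jar plugin packages after compilation:

        <!-- inside <build><plugins> in pom.xml -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <configuration>
            <excludes>
              <exclude>**/userConfig.properties</exclude>
            </excludes>
          </configuration>
        </plugin>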

    Read the article

  • SFTP transfer file and move file to folder

    - by molecule
    Hi all, This is my first post so please excuse my ignorance. I am using a vbscript to zip all .csv type files in a particular folder. After some google searches, I have found a workable vbscript to do this and have enabled a scheduled task to automate this. What I need to do next is to transfer the zip file via sftp and then "move" that zip file into another folder. I believe the former can be achieved using pscp.exe via command line but can someone show me how to do the latter? Basically the zipping will be done twice a day and so it will have a timestamp similar to yyyymmdd0900.zip (for 9am schedule) and yyyymmdd1800.zip (for 6pm schedule). After the transfer, I want to move (not copy) the zip file generated into another folder. Any pointers would be greatly appreciated. Thank you all in advance.
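
    For the local move after the transfer, the same VBScript can use the FileSystemObject; MoveFile relocates rather than copies (the folder names below are illustrative):

        Dim fso
        Set fso = CreateObject("Scripting.FileSystemObject")
        ' after pscp reports success, move the dated zip out of the outgoing folder
        fso.MoveFile "C:\outgoing\201004301800.zip", "C:\sent\201004301800.zip"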

    Read the article

  • Windows batch file to list folders that have a specific file in them

    - by Lee
    I'm trying to create a file that has a list of directories that have a specific file name in them. Let's say I'm trying to find directories that have a file named *.joe in them. I initially tried a simple

        dir /ad *.joe > dir_list.txt

    but it searches the directory names for *.joe, so no go. Then I concluded that a for loop was probably my best bet. I started with

        for /d /r %a in ('dir *.joe /b') do @echo %a >> dir_list.txt

    and it looked like it wasn't executing the dir command. I added "usebackq", but that seems to only work with the /F command extension. Ideas?
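
    A sketch in command-prompt form, matching the question's %a usage: let for /f run the dir command, and use the ~dp modifier to reduce each matching file to its drive and path (double the percent signs if this goes in a .bat file; a folder holding several .joe files will be listed once per file):

        for /f "delims=" %a in ('dir /s /b *.joe') do @echo %~dpa>> dir_list.txt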

    Read the article

  • WIX - Verify that file exists - and/or file-browser dialog/button

    - by NealWalters
    How do you create a "Browse" button in a WIX dialog. I currently have a custom dialog box with four radio buttons (Dev, QA, Stage, and Prod), and a text field for a filename. The install of course dies if the user enters a bad filename. I would be happy first of all just to verify that the file they entered exists. Secondly, I would like to add a File-Browser button, if such things exists in WIX. But even then, I would imagine the user could type in any file name, and I should still check to see if it exists. Thanks, Neal Walters

    Read the article

  • Config file location for an app installed to a per-user location

    - by user54064
    I have a WinForms app whose config file contains the locations of files for the app to use as it runs. However, the app will be installed locally for each user, so the paths can't be hard-coded. On Vista and Windows 7 the installer puts the app in the c:\users\\Documents area; under Windows XP it puts it in a different location. How can I write the config file to use some sort of placeholder that is filled in at runtime with the specific user's information? I am just using the default areas that Windows wants for a per-user install, but I need the config file to be flexible at runtime for the specific user.
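
    A minimal sketch of the runtime side (the key and folder names are illustrative): either resolve the per-user folder in code and keep no path in the config at all, or store an environment-variable placeholder in the config file and expand it at startup:

        using System;
        using System.Configuration;
        using System.IO;

        // Option 1: ask Windows for the per-user Documents folder (correct on XP, Vista, and 7)
        string docs = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments);
        string dataPath = Path.Combine(docs, "MyAppFiles");

        // Option 2: config holds "%USERPROFILE%\Documents\MyAppFiles"; expand it at runtime
        string raw = ConfigurationManager.AppSettings["DataPath"];
        string resolved = Environment.ExpandEnvironmentVariables(raw);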

    Read the article

  • Is there a way to "freeze" a file in Git?

    - by Suan
    I'm in a situation where I want to open source my project; however, there's a single source file that I want to release a "clean" version of, while using a separate version locally. Does git have a feature where I can just commit a file once, and it stops looking for changes to that file from then on? I've tried adding the file to .gitignore, but after the first time I do a git add -f and git commit on the file, when I proceed to edit it again, git status shows the file as changed. The ideal behavior would be for git not to show this file as changed from now on, even though I've edited it. I'd also be interested in how others have dealt with "scrubbing" their codebases of private code/data before pushing to an open source repo, especially on Git.
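
    Git has a switch for this, though it is per-clone rather than shared: commit the clean version once, then set the assume-unchanged bit so git status stops reporting local edits:

        # stop noticing local changes to the file
        git update-index --assume-unchanged path/to/file
        # turn change tracking back on later
        git update-index --no-assume-unchanged path/to/file

    One caveat worth knowing: operations that rewrite the work tree (checkout, pull) can still overwrite or resurface the local edits, so keep the private version backed up elsewhere.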

    Read the article

  • How do I dynamically load a js file using Prototype?

    - by domagoj412
    Hello, I am using Prototype to load an external js file (actually a php file) dynamically, like this:

        function UpdateJS(file) {
            var url = 'main_js.php?file=' + file;
            var myAjax = new Ajax.Request(url, { method: 'get', onComplete: showResponseHeader });
        }

        function showResponseHeader(originalRequest) {
            $('jscode').innerHTML = originalRequest.responseText;
        }

    The "jscode" container is defined like this:

        <script type="text/javascript" id="jscode"></script>

    And it works! But if some different file is called, all the functions from the previous one are preserved, and I don't want that. Does anybody know how to "unload" the first js file when the second one is called? (I also tried the Ajax.Updater function, but the result is the same.) Update: It turns out that there is a bigger problem: it only loads if the function UpdateJS is in window.onload, which is why it doesn't load anything else after that. So Prototype's update is maybe not such a good way to do this...
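
    JavaScript has no way to unload functions once they have been evaluated, so the usual workaround (a sketch; the PageModule name is made up) is a convention: each loadable file defines everything on a single namespace object, and the loader clears that object before evaluating the next file, so stale functions disappear with it:

        function showResponseHeader(originalRequest) {
            window.PageModule = null;            // drop whatever the previous file defined
            eval(originalRequest.responseText);  // evaluate the new code directly
        }

        // each main_js.php response then defines only members of the namespace:
        window.PageModule = {
            doSomething: function () { /* ... */ }
        };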

    Read the article

  • How do I return a file and strongly typed data at the same time?

    - by chobo2
    Hi, I am using ASP.NET MVC 1.0 and I want to return an XML file, but I also want to return strongly typed data so I can update some fields. For example, the XML file will contain the users who failed to be inserted into the database, and I want it to appear as a save dialog box, which is what ASP.NET MVC's return File() does. However, I also want to return values on the page, like how many users failed to be added, how many users were added, etc., so I want to use scaffolding with the class I would pass along. If this were a view I could pass it along as a model object, but I don't see a parameter for that in File(). I also don't want to save the XML file onto the hard drive; I want to do it through memory. So a link displayed on the page to download the file, next to the data I want to show, would not be my preferred approach.
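
    One caveat first: a single HTTP response can be either a file download or a rendered page, never both, so some two-step flow is hard to avoid. A hedged sketch of the common pattern (RunImport, ImportResult, and the TempData key are made-up names; Session or a cache would work equally well for holding the bytes in memory):

        public ActionResult Import()
        {
            ImportResult model = RunImport();                   // hypothetical helper
            TempData["FailedUsersXml"] = model.FailedUsersXml;  // byte[] kept in memory, not on disk
            return View(model);                                 // strongly typed page with the counts
        }

        public ActionResult DownloadFailures()
        {
            byte[] xml = (byte[])TempData["FailedUsersXml"];
            return File(xml, "text/xml", "failed_users.xml");   // triggers the save dialog
        }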

    Read the article
