Search Results

Search found 26093 results on 1044 pages for 'process monitor'.

  • WebCenter Customer Spotlight: Azul Brazilian Airlines

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) is the third-largest airline in Brazil, serving 42 destinations with a fleet of 49 aircraft and employing 4,500 crew members. The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets and for the company's marketing team to more effectively conduct its campaigns. To this end, Azul implemented Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Azul can now complete the Web site content updating process—which used to take approximately 48 hours—in less than five minutes.

    Company Overview
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) has established itself as the third-largest airline in Brazil, based on a business model that combines low prices with a high level of service. Azul serves 42 destinations with a fleet of 49 aircraft. It operates 350 daily flights with a team of 4,500 crew members. Last year, the company transported 15 million passengers, achieving a 10% share of the Brazilian market, according to the Agência Nacional de Aviação Civil (ANAC, or the National Civil Aviation Agency).

    Business Challenges
    The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets and for the company's marketing team to more effectively conduct its campaigns.
      - Provide customers with an innovative Web site with a simple process for purchasing flight tickets
      - Bring dynamism to the Web site's content updating process to provide autonomy to the airline's strategic departments, such as marketing and product development
      - Facilitate integration among the site's different application providers, such as ticket availability and payment process, on which ticket sales depend

    Solution Deployed
    Azul worked with the Oracle partner TQI to implement Oracle WebCenter Sites, succeeding in gathering all of the site's key information onto a single platform. Previously, at least three servers and corporate information environments had directed data to the portal. The single Oracle-based platform now facilitates site updates, which are daily and constant.

    Business Results
      - Gained development freedom in all processes, from implementation to content editing
      - Gathered all of the Web site's key information onto a single platform, facilitating its daily and constant updating, whereas the information was previously spread among at least three IT environments and had to go through a complex process to be made available online to customers
      - Reduced the time needed to update banners and other Web site content from an average of 48 hours to less than five minutes
      - Simplified the flight ticket sales process thanks to tool flexibility that enabled the company to improve Web site usability

    "Oracle WebCenter Sites provides an easy-to-use platform that enables our marketing department to spend less time updating content and more time on innovative activities. Previously, it would take 48 hours to update content on our Web site; now it takes less than five minutes. We have shown the market that we are innovators, enabling customer convenience through an improved flight ticket purchase process." - Kleber Linhares, Information Technology and E-Commerce Director, Azul Linhas Aéreas Brasileiras

    Additional Information
      - Azul Brazilian Airlines Case Study
      - Oracle WebCenter Sites
      - Oracle WebCenter Sites Satellite Server

  • ACPI Events not triggered after booting

    - by user3208188
    I purchased a convertible (laptop and tablet, Ubuntu 14.04 installed) and would like to execute a script when changing from laptop to tablet mode and the other way around. So I created two new files within the /etc/acpi/events folder corresponding to two new scripts in the /etc/acpi folder. Everything works well if I create a new acpid process manually (e.g. "sudo acpid -c /etc/acpi/events -s /var/run/acpid.socket"). However, if I do not start this process manually, the scripts are not executed, even though a process with the same options is already running right after the machine has booted by default. The only difference I can find is the parent pid: if I create the process myself, the parent is "init --user", while the parent of the automatically started process is "/sbin/init". Can anyone explain why this happens and how I can avoid having to start an acpid process manually every time I boot the device?

  • c - fork() and wait()

    - by Joe
    Hi there, I need to use the fork() and wait() functions to complete an assignment. We are modelling non-deterministic behaviour and need the program to fork() if there is more than one possible transition. In order to try and work out how fork and wait work, I have just made a simple program. I think I understand now how the calls work and would be fine if the program only branched once, because the parent process could use the exit status from the single child process to determine whether the child process reached the accept state or not. As you can see from the code that follows, though, I want to be able to handle situations where there must be more than one child process. My problem is that you seem to be able to set the status using an _exit function only once. So, as in my example, the exit status that the parent process tests for shows that the first child process issued 0 as its exit status, but it has no information on the second child process. I tried simply not _exit()-ing on a reject, but then that child process would carry on, and in effect there would seem to be two parent processes. Sorry for the waffle, but I would be grateful if someone could tell me how my parent process could obtain the status information on more than one child process; alternatively, I would be happy for the parent process to only notice accept statuses from the child processes, but in that case I would need to successfully exit from the child processes which have a reject status. My test code is as follows:

        #include <stdio.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/wait.h>

        int main(void)
        {
            pid_t child_pid, wpid, pid;
            int status = 0;
            int i;
            int a[3] = {1, 2, 1};

            for(i = 1; i < 3; i++) {
                printf("i = %d\n", i);
                pid = getpid();
                printf("pid after i = %d\n", pid);
                if((child_pid = fork()) == 0) {
                    printf("In child process\n");
                    pid = getpid();
                    printf("pid in child process is %d\n", pid);
                    /* Is a child process */
                    if(a[i] < 2) {
                        printf("Should be accept\n");
                        _exit(1);
                    } else {
                        printf("Should be reject\n");
                        _exit(0);
                    }
                }
            }

            if(child_pid > 0) {
                /* Is the parent process */
                pid = getpid();
                printf("parent_pid = %d\n", pid);
                wpid = wait(&status);
                if(wpid != -1) {
                    printf("Child's exit status was %d\n", status);
                    if(status > 0) {
                        printf("Accept\n");
                    } else {
                        printf("Complete parent process\n");
                        if(a[0] < 2) {
                            printf("Accept\n");
                        } else {
                            printf("Reject\n");
                        }
                    }
                }
            }
            return 0;
        }

    Many thanks, Joe
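
    One way to get a separate status for every child is for the parent to call wait() (or waitpid()) once per fork() and inspect each result on its own. Below is a minimal sketch of that idea, written in Python because its os.fork/os.waitpid/os.WEXITSTATUS calls mirror the C ones; the transition array and the accept/reject rule are placeholders lifted from the question, not a full solution to the assignment.

        import os

        a = [1, 2, 1]          # placeholder transition data, as in the question
        children = []

        for i in range(1, 3):
            pid = os.fork()
            if pid == 0:
                # Child: report "accept" with exit code 1, "reject" with exit code 0
                os._exit(1 if a[i] < 2 else 0)
            children.append(pid)

        # Parent: reap every child and inspect each exit status separately
        for pid in children:
            _, status = os.waitpid(pid, 0)
            accepted = os.WIFEXITED(status) and os.WEXITSTATUS(status) == 1
            print("child %d: %s" % (pid, "Accept" if accepted else "Reject"))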

  • MVC SiteMap - when different nodes point to same action SiteMap.CurrentNode does not map to the correct route

    - by awrigley
    Setup: I am using ASP.NET MVC 4, with mvcSiteMapProvider to manage my menus. I have a custom menu builder that evaluates whether a node is on the current branch (ie, if the SiteMap.CurrentNode is either the CurrentNode or the CurrentNode is nested under it). The code is included below, but essentially checks the url of each node and compares it with the url of the currentnode, up through the currentnodes "family tree". The CurrentBranch is used by my custom menu builder to add a class that highlights menu items on the CurrentBranch. The Problem: My custom menu works fine, but I have found that the mvcSiteMapProvider does not seem to evaluate the url of the CurrentNode in a consistent manner: When two nodes point to the same action and are distinguished only by a parameter of the action, SiteMap.CurrentNode does not seem to use the correct route (it ignores the distinguishing parameter and defaults to the first route that that maps to the action defined in the node). Example of the Problem: In an app I have Members. A Member has a MemberStatus field that can be "Unprocessed", "Active" or "Inactive". To change the MemberStatus, I have a ProcessMemberController in an Area called Admin. The processing is done using the Process action on the ProcessMemberController. My mvcSiteMap has two nodes that BOTH map to the Process action. The only difference between them is the alternate parameter (such are my client's domain semantics), that in one case has a value of "Processed" and in the other "Unprocessed": Nodes: <mvcSiteMapNode title="Process" area="Admin" controller="ProcessMembers" action="Process" alternate="Unprocessed" /> <mvcSiteMapNode title="Change Status" area="Admin" controller="ProcessMembers" action="Process" alternate="Processed" /> Routes: The corresponding routes to these two nodes are (again, the only thing that distinguishes them is the value of the alternate parameter): context.MapRoute( "Process_New_Members", "Admin/Unprocessed/Process/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Unprocessed", MemberId = UrlParameter.Optional } ); context.MapRoute( "Change_Status_Old_Members", "Admin/Members/Status/Change/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Processed", MemberId = UrlParameter.Optional } ); What works: The Html.ActionLink helper uses the routes and produces the urls I expect: @Html.ActionLink("Process", MVC.Admin.ProcessMembers.Process(item.MemberId, "Unprocessed") // Output (alternate="Unprocessed" and item.MemberId = 12): Admin/Unprocessed/Process/12 @Html.ActionLink("Status", MVC.Admin.ProcessMembers.Process(item.MemberId, "Processed") // Output (alternate="Processed" and item.MemberId = 23): Admin/Members/Status/Change/23 In both cases the output is correct and as I expect. What doesn't work: Let's say my request involves the second option, ie, /Admin/Members/Status/Change/47, corresponding to alternate = "Processed" and a MemberId of 47. Debugging my static CurrentBranch property (see below), I find that SiteMap.CurrentNode shows: PreviousSibling: null Provider: {MvcSiteMapProvider.DefaultSiteMapProvider} ReadOnly: false ResourceKey: "" Roles: Count = 0 RootNode: {Home} Title: "Process" Url: "/Admin/Unprocessed/Process/47" Ie, for a request url of /Admin/Members/Status/Change/47, SiteMap.CurrentNode.Url evaluates to /Admin/Unprocessed/Process/47. Ie, it is ignorning the alternate parameter and using the wrong route. CurrentBranch Static Property: /// <summary> /// ReadOnly. 
Gets the Branch of the Site Map that holds the SiteMap.CurrentNode /// </summary> public static List<SiteMapNode> CurrentBranch { get { List<SiteMapNode> currentBranch = null; if (currentBranch == null) { SiteMapNode cn = SiteMap.CurrentNode; SiteMapNode n = cn; List<SiteMapNode> ln = new List<SiteMapNode>(); if (cn != null) { while (n != null && n.Url != SiteMap.RootNode.Url) { // I don't need to check for n.ParentNode == null // because cn != null && n != SiteMap.RootNode ln.Add(n); n = n.ParentNode; } // the while loop excludes the root node, so add it here // I could add n, that should now be equal to SiteMap.RootNode, but this is clearer ln.Add(SiteMap.RootNode); // The nodes were added in reverse order, from the CurrentNode up, so reverse them. ln.Reverse(); } currentBranch = ln; } return currentBranch; } } The Question: What am I doing wrong? The routes are interpreted by Html.ActionLlink as I expect, but are not evaluated by SiteMap.CurrentNode as I expect. In other words, in evaluating my routes, SiteMap.CurrentNode ignores the distinguishing alternate parameter.

  • Most useful AutoHotkey scripts?

    - by Jon Erickson
    What AutoHotkey scripts have you found that proved to be extremely useful? I'll share one that I am using for multiple monitors.

    Windows + Right: Move window from the left monitor to the right monitor

        #right::
        WinGet, mm, MinMax, A
        WinRestore, A
        WinGetPos, X, Y,,,A
        WinMove, A,, X+A_ScreenWidth, Y
        if(mm = 1){
            WinMaximize, A
        }
        return

    Windows + Left: Move window from the right monitor to the left monitor

        #left::
        WinGet, mm, MinMax, A
        WinRestore, A
        WinGetPos, X, Y,,,A
        WinMove, A,, X-A_ScreenWidth, Y
        if(mm = 1){
            WinMaximize, A
        }
        return

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot. About 50% of the time it is totally correct.

    Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new xserver driver for xinerama. This messes up behaviour if the mouse goes either left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything finally worked out fine; I got my 3-monitor setup back with xinerama enabled to get one big desktop stretched over 3 screens.

    Now the fun part: every time I start up my machine, only one of the 3 monitors gets a signal and is woken up: it only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into this one screen. If I go into nvidia settings I only see one physical device although all 3 are connected and have power. When I look into the xorg.conf I can still see my old setup with 3 devices, 3 screens, xinerama active etc... But I was totally unable to get 3 monitors to work. (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...)

    But it gets even better: when I restart my machine (i.e. choose the restart option from the Ubuntu menu) it shuts down and tries to restart. The restart then gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving dots thingy) and I am forced to kill the machine by cutting power. But after the power cut the machine boots up normally and suddenly I get my 3-monitor setup back up and working. That is, until the next time I shut down and start up, where it all starts over again and I only have one monitor... (see above)

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from the 'normal' boot. But the fact that it gets stuck and I need to cut power, which then basically triggers a 'normal' boot, does not really support this theory...

    My setup (please tell me if you need further info):
      - 3 monitors as 3 screens as one desktop (with xinerama)
      - 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
      - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ....)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...

  • Corosync :: Restarting some resources after LAN connectivity issue

    - by moebius_eye
    I am currently looking into corosync to build a two-node cluster. So, I've got it working fine, and it does what I want to do, which is: Lost connectivity between the two nodes gives the first node '10node' both Failover Wan IPs. (aka resources WanCluster100 and WanCluster101 ) '11node' does nothing. He "thinks" he still has his Failover Wan IP. (aka WanCluster101) But it doesn't do this: '11node' should restart the WanCluster101 resource when the connectivity with the other node is back. This is to prevent a condition where node10 simply dies (and thus does not get 11node's Failover Wan IP), resulting in a situation where none of the nodes have 10node's failover IP because 10node is down 11node has "given back" his failover Wan IP. Here's the current configuration I'm working on. node 10sch \ attributes standby="off" node 11sch \ attributes standby="off" primitive LanCluster100 ocf:heartbeat:IPaddr2 \ params ip="172.25.0.100" cidr_netmask="32" nic="eth3" \ op monitor interval="10s" \ meta is-managed="true" target-role="Started" primitive LanCluster101 ocf:heartbeat:IPaddr2 \ params ip="172.25.0.101" cidr_netmask="32" nic="eth3" \ op monitor interval="10s" \ meta is-managed="true" target-role="Started" primitive Ping100 ocf:pacemaker:ping \ params host_list="192.0.2.1" multiplier="500" dampen="15s" \ op monitor interval="5s" \ meta target-role="Started" primitive Ping101 ocf:pacemaker:ping \ params host_list="192.0.2.1" multiplier="500" dampen="15s" \ op monitor interval="5s" \ meta target-role="Started" primitive WanCluster100 ocf:heartbeat:IPaddr2 \ params ip="192.0.2.100" cidr_netmask="32" nic="eth2" \ op monitor interval="10s" \ meta target-role="Started" primitive WanCluster101 ocf:heartbeat:IPaddr2 \ params ip="192.0.2.101" cidr_netmask="32" nic="eth2" \ op monitor interval="10s" \ meta target-role="Started" primitive Website0 ocf:heartbeat:apache \ params configfile="/etc/apache2/apache2.conf" options="-DSSL" \ operations $id="Website-one" \ op start interval="0" timeout="40" \ op stop interval="0" timeout="60" \ op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \ meta target-role="Started" primitive Website1 ocf:heartbeat:apache \ params configfile="/etc/apache2/apache2.conf.1" options="-DSSL" \ operations $id="Website-two" \ op start interval="0" timeout="40" \ op stop interval="0" timeout="60" \ op monitor interval="10" timeout="120" start-delay="0" statusurl="http://127.0.0.1/server-status/" \ meta target-role="Started" group All100 WanCluster100 LanCluster100 group All101 WanCluster101 LanCluster101 location AlwaysPing100WithNode10 Ping100 \ rule $id="AlWaysPing100WithNode10-rule" inf: #uname eq 10sch location AlwaysPing101WithNode11 Ping101 \ rule $id="AlWaysPing101WithNode11-rule" inf: #uname eq 11sch location NeverLan100WithNode11 LanCluster100 \ rule $id="RAND1083308" -inf: #uname eq 11sch location NeverPing100WithNode11 Ping100 \ rule $id="NeverPing100WithNode11-rule" -inf: #uname eq 11sch location NeverPing101WithNode10 Ping101 \ rule $id="NeverPing101WithNode10-rule" -inf: #uname eq 10sch location Website0NeedsConnectivity Website0 \ rule $id="Website0NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0 location Website1NeedsConnectivity Website1 \ rule $id="Website1NeedsConnectivity-rule" -inf: not_defined pingd or pingd lte 0 colocation Never -inf: LanCluster101 LanCluster100 colocation Never2 -inf: WanCluster100 LanCluster101 colocation NeverBothWebsitesTogether -inf: Website0 Website1 property 
$id="cib-bootstrap-options" \ dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \ cluster-infrastructure="openais" \ expected-quorum-votes="2" \ no-quorum-policy="ignore" \ stonith-enabled="false" \ last-lrm-refresh="1408954702" \ maintenance-mode="false" rsc_defaults $id="rsc-options" \ resource-stickiness="100" \ migration-threshold="3" I also have a less important question concerning this line: colocation NeverBothLans -inf: LanCluster101 LanCluster100 How do I tell it that this collocation only applies to '11node'.

  • Dual Monitors in Ubuntu

    - by Luminose
    My only issue that is stopping me from moving to Linux is dual monitor support. If I use TwinView, maximizing an application causes it to take over both monitors, not maximize in the current monitor the way it works in Windows. If I use two separate X screens, certain programs default to a specific monitor with no way of moving them to the other desktop. Has anyone else had these issues? Are there any detailed dual monitor resources for Linux/Ubuntu I can read?

  • Dual Monitors Switching While Running Full-Screen Game?

    - by Phil Sandler
    I have a dual-monitor setup, and currently I can run a full-screen game (Warcraft 3 and Starcraft 2 at the moment) and still see anything I have open on my second monitor. I can't move my mouse from one monitor to the other, which is a good thing. However, I would like to know if it's possible to press some key combination and "release" my mouse from the full-screen game to my second monitor without the game minimizing (which is what happens when I Alt-Tab).

  • Help with force close occurrences in my app

    - by Ken
    This is the last issue with this app. Periodic force close situations. I think something should be on another thread but I'm not sure what. Anyway, I can always count on a freeze on first install. If I wait, eventually (maybe 10 seconds) the app comes around, maybe more. here is an excerpt from logcat--the three lines occur after full layout is displayed and I attempt to touch a [game] 'peg' which should spawn a sprite, but the freeze occurs there. Can anybody tell what the issue might be?: I/System.out( 279): TouchDown (17.0,106.0) I/System.out( 279): checking (17,106 I/System.out( 279): hit for bounds Rect(3, 98 - 32, 130) [FREEZE BEGINS] W/webcore ( 279): Can't get the viewWidth after the first layout W/WindowManager( 60): Key dispatching timed out sending to com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree W/WindowManager( 60): Previous dispatch state: null W/WindowManager( 60): Current dispatch state: {{null to Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} @ 1295232880017 lw=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false} lb=android.os.BinderProxy@440523b8 fin=false gfw=true ed=true tts=0 wf=false fp=false mcf=Window{43fd87a0 com.live.brainbuilderfree/com.live.brainbuilderfree.BrainBuilderFree paused=false}}} I/Process ( 60): Sending signal. PID: 279 SIG: 3 I/dalvikvm( 279): threadid=3: reacting to signal 3 D/dalvikvm( 124): GC_EXPLICIT freed 1754 objects / 106104 bytes in 7365ms I/Process ( 60): Sending signal. PID: 60 SIG: 3 I/dalvikvm( 60): threadid=3: reacting to signal 3 I/dalvikvm( 60): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 263 SIG: 3 I/dalvikvm( 263): threadid=3: reacting to signal 3 I/dalvikvm( 279): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 117 SIG: 3 I/dalvikvm( 117): threadid=3: reacting to signal 3 I/dalvikvm( 117): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 254 SIG: 3 I/Process ( 60): Sending signal. PID: 121 SIG: 3 I/dalvikvm( 121): threadid=3: reacting to signal 3 D/AudioSink( 34): bufferCount (4) is too small and increased to 12 I/System.out( 279): making white sprite I/Process ( 60): Sending signal. PID: 186 SIG: 3 I/Process ( 60): Sending signal. PID: 232 SIG: 3 D/MillennialMediaAdSDK( 279): size: 1 D/MillennialMediaAdSDK( 279): num: 1 D/AdWhirl SDK( 279): Millennial success D/AdWhirl SDK( 279): Will call rotateAd() in 120 seconds I/dalvikvm( 232): threadid=3: reacting to signal 3 I/dalvikvm( 121): Wrote stack traces to '/data/anr/traces.txt' I/Process ( 60): Sending signal. PID: 222 SIG: 3 I/MillennialMediaAdSDK( 279): Millennial ad return success D/MillennialMediaAdSDK( 279): View height: 0 D/MillennialMediaAdSDK( 279): nextUrl: [deleted] I/Process ( 60): Sending signal. PID: 239 SIG: 3 I/Process ( 60): Sending signal. PID: 213 SIG: 3 D/AdWhirl SDK( 279): Added subview D/AdWhirl SDK( 279): Pinging URL: [deleted] I/Process ( 60): Sending signal. PID: 197 SIG: 3 I/dalvikvm( 197): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 164 SIG: 3 I/dalvikvm( 164): threadid=3: reacting to signal 3 D/dalvikvm( 279): GC_FOR_MALLOC freed 7735 objects / 639688 bytes in 217ms I/Process ( 60): Sending signal. PID: 124 SIG: 3 I/dalvikvm( 124): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. PID: 158 SIG: 3 I/dalvikvm( 158): threadid=3: reacting to signal 3 I/Process ( 60): Sending signal. 
PID: 127 SIG: 3 E/ActivityManager( 60): ANR in com.live.brainbuilderfree (com.live.brainbuilderfree/.BrainBuilderFree) E/ActivityManager( 60): Reason: keyDispatchingTimedOut E/ActivityManager( 60): Load: 3.46 / 1.69 / 0.65 E/ActivityManager( 60): CPU usage from 28095ms to 140ms ago: E/ActivityManager( 60): system_server: 30% = 25% user + 4% kernel / faults: 3119 minor 66 major E/ActivityManager( 60): mediaserver: 11% = 7% user + 4% kernel / faults: 746 minor 17 major E/ActivityManager( 60): com.svox.pico: 1% = 0% user + 1% kernel / faults: 2833 minor 8 major E/ActivityManager( 60): d.process.acore: 1% = 0% user + 0% kernel / faults: 1146 minor 36 major E/ActivityManager( 60): ndroid.launcher: 1% = 0% user + 0% kernel / faults: 852 minor 6 major E/ActivityManager( 60): m.android.phone: 0% = 0% user + 0% kernel / faults: 621 minor 7 major E/ActivityManager( 60): kswapd0: 0% = 0% user + 0% kernel E/ActivityManager( 60): ronsoft.openwnn: 0% = 0% user + 0% kernel / faults: 337 minor 2 major E/ActivityManager( 60): adbd: 0% = 0% user + 0% kernel / faults: 3 minor E/ActivityManager( 60): zygote: 0% = 0% user + 0% kernel / faults: 169 minor E/ActivityManager( 60): events/0: 0% = 0% user + 0% kernel E/ActivityManager( 60): rild: 0% = 0% user + 0% kernel / faults: 103 minor 3 major E/ActivityManager( 60): pdflush: 0% = 0% user + 0% kernel E/ActivityManager( 60): .quicksearchbox: 0% = 0% user + 0% kernel / faults: 61 minor E/ActivityManager( 60): id.defcontainer: 0% = 0% user + 0% kernel / faults: 12 minor E/ActivityManager( 60): +rainbuilderfree: 0% = 0% user + 0% kernel E/ActivityManager( 60): +sh: 0% = 0% user + 0% kernel E/ActivityManager( 60): +app_process: 0% = 0% user + 0% kernel E/ActivityManager( 60): TOTAL: 100% = 76% user + 21% kernel + 2% iowait + 0% irq + 0% softirq I/dalvikvm( 127): threadid=3: reacting to signal 3 I/dalvikvm( 186): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 3747 objects / 228920 bytes in 609ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.759MB for 36896-byte allocation I/dalvikvm( 239): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 226 objects / 9952 bytes in 546ms I/dalvikvm( 213): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 5816 bytes in 492ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.815MB for 49188-byte allocation I/dalvikvm( 222): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 77 objects / 5232 bytes in 546ms I/dalvikvm( 254): threadid=3: reacting to signal 3 D/dalvikvm( 60): GC_FOR_MALLOC freed 105 objects / 55856 bytes in 521ms I/dalvikvm-heap( 60): Grow heap (frag case) to 4.876MB for 98360-byte allocation D/dalvikvm( 60): GC_FOR_MALLOC freed 58 objects / 3632 bytes in 340ms D/dalvikvm( 60): GC_FOR_MALLOC freed 1093 objects / 185256 bytes in 572ms W/WindowManager( 60): Continuing to wait for key to be dispatched I/System.out( 279): TouchMove (117.0,124.0) I/System.out( 279): TouchUP (117.0,124.0) D/dalvikvm( 60): GC_FOR_MALLOC freed 141 objects / 108328 bytes in 564ms I/ARMAssembler( 60): generated scanline__00000077:03515104_00000000_00000000 [ 33 ipp] (47 ins) at [0x313d78:0x313e34] in 11621593 ns W/InputManagerService( 60): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@43f66a10 I/dalvikvm( 239): Wrote stack traces to '/data/anr/traces.txt' I/dalvikvm( 263): Wrote stack traces to '/data/anr/traces.txt' etc...

  • How do I view the job queue in lftp after it has moved to a background process?

    - by drpfenderson
    I've just started using lftp for transferring files to and from a remote server on my Raspberry Pi running Debian. I know how to transfer the files, and how to use queue and jobs to add and view transferring files. However, I'm not actually sure how to view these transfers once lftp moves to the background. The lftp man page mentions that lftp moves to the background, but when I open a new instance of the program from the shell and type jobs, the queue is empty. However, I can clearly see using my file manager that the transfers are still happening, as the files are there and growing in size. I'm guessing that when I reopen lftp, it's just opening a new instance that isn't connected to the nohup-mode lftp that has the active queue. I've tried searching various places, but no one else seems to have this particular issue. So, I guess what I'm asking is twofold: Is there a way to easily attach to the background lftp process to view the current jobs list? If not, is there a way to view this at all?

  • Syntax for file and process exclusions in Forefront Endpoint Protection?

    - by Massimo
    I can't seem to find official, up-to-date documentation on how to set up file and process exclusions in Forefront Endpoint Protection 2012.

    For file types, which of these will work? Are they the same?

        ext
        .ext
        *.ext

    What about wildcards?

        .e?t
        .e*
        .*t

    For file paths, which wildcards are allowed and how do they work?

        C:\path*
        C:\path\s*e
        C:\path\somef?le
        C:\*\somefile
        C:\pa*\somefile
        C:\pa?h\somefile
        *\path
        *:\path

    For processes, can a wildcard be used when specifying the file name? Is the syntax the same as for file paths?

    Also: I read in this post that, as of October 2009, Real Time Protection ignored wildcards; is this still true for the 2012 version?

  • How can I poll different AWS SQS queues in the same process?

    - by Luccas
    What is the right way to poll different AWS SQS queues in the same process? Suppose I have a ruby script, listen_queues.rb, and run it. Should I create threads to wrap each SQS poll, or start sub-processes?

        t1 = Thread.new do
          queue1.poll do |msg|
            ....
          end
        end

        t2 = Thread.new do
          queue2.poll do |msg|
            ....
          end
        end

        t2.join

    I tried this code, but the poll is not receiving any of the messages available. When I run only one of them (t1 or t2), it works. But I need the two of them running. What is going on? Thanks!!
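
    For comparison, here is a minimal long-polling sketch in Python with boto3 rather than the Ruby SDK used above; the queue URLs and region are placeholders, and each thread builds its own client to stay on the safe side with respect to thread safety.

        import threading
        import boto3

        QUEUE_URLS = [
            "https://sqs.us-east-1.amazonaws.com/123456789012/queue1",  # placeholder
            "https://sqs.us-east-1.amazonaws.com/123456789012/queue2",  # placeholder
        ]

        def poll(queue_url):
            sqs = boto3.client("sqs", region_name="us-east-1")  # one client per thread
            while True:
                resp = sqs.receive_message(
                    QueueUrl=queue_url,
                    MaxNumberOfMessages=10,
                    WaitTimeSeconds=20,  # long polling instead of busy looping
                )
                for msg in resp.get("Messages", []):
                    print(queue_url, msg["Body"])
                    sqs.delete_message(QueueUrl=queue_url,
                                       ReceiptHandle=msg["ReceiptHandle"])

        threads = [threading.Thread(target=poll, args=(u,)) for u in QUEUE_URLS]
        for t in threads:
            t.start()
        for t in threads:
            t.join()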

  • How can I determine a cmd.exe's parent process?

    - by René Nyffenegger
    Sometimes I find myself in a cmd.exe environment that was itself started by another cmd.exe or by another console-based application. Now, working in such an environment, I'd like to know what happens if I type exit: whether the cmd.exe window will disappear, or whether it will go back to the creating cmd.exe or application. This, of course, because sometimes as I work in cmd.exe I forget how I called it. So, is there a way to find out the parent process (if that is the correct term for what I mean) of a cmd.exe from within cmd.exe?
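
    A sketch of the same idea from the outside, using Python and the third-party psutil package (an assumption, not something built into cmd.exe): started from inside the shell in question, it walks one and two levels up the process tree, and the grandparent is whatever created that cmd.exe.

        import psutil

        me = psutil.Process()        # the python.exe started from the cmd.exe in question
        parent = me.parent()         # that cmd.exe itself
        grandparent = parent.parent() if parent else None

        print("this process:", me.pid, me.name())
        if parent:
            print("parent      :", parent.pid, parent.name())
        if grandparent:
            print("grandparent :", grandparent.pid, grandparent.name())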

  • How to stop an infinitely running process (ztail) started by an ssh session after that session is closed

    - by Sanath Adiga
    I have a peculiar problem. My server supports multiple simultaneous ssh sessions, so that multiple admins can manage it at the same time. We have a command which calls ztail to show the compressed log files, and when the current ssh session is closed (without pressing Ctrl-C to stop the tail command), the command should ideally stop working. But what I observed when I started a new ssh session is that the ztail process is still running in the background and consuming CPU, even though the previous session was closed. How can I determine when the session is closed, so that I can use that variable/flag to close/stop any commands initiated by that previously closed session?
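
    One possible pattern is a watchdog wrapper, sketched here in Python purely as an illustration of the general idea (not a feature of ztail itself): run the long-lived command under a small wrapper that notices when the shell that started it goes away, since an orphaned process is re-parented (usually to PID 1) once its login shell exits.

        import os
        import signal
        import subprocess
        import sys
        import time

        # Hypothetical usage: watchdog.py ztail /var/log/something.gz
        original_parent = os.getppid()          # the interactive shell that started us
        child = subprocess.Popen(sys.argv[1:])  # the long-running command, e.g. ztail

        try:
            while child.poll() is None:
                # When the ssh session closes, our shell dies and we get re-parented
                if os.getppid() != original_parent:
                    child.send_signal(signal.SIGTERM)
                    break
                time.sleep(2)
        finally:
            child.wait()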

  • Can iptables allow Squid to process a request, then redirect the response packets to another port?

    - by Dan H
    I'm trying to test a fancy traffic analyzer app, which I have running on port 8890. My current plan is to let any HTTP request come into Squid, on port 3128, and let it process the request, and then just before it sends the response back, use iptables to redirect the response packets (leaving port 3128) to port 8890. I've researched this all night, and tried many iptables commands, but I'm missing something and my hair is falling out. I thought something like this would work:

        iptables -t nat -A OUTPUT -p tcp --sport 3128 -j REDIRECT --to-ports 8990

    This rule gets created ok, but it never redirects anything. Is this even possible? If so, what iptables incantation could do it? If not, any idea what might work on a single host, given multiple remote browser clients?

  • Is there a maximum of open files per process in Linux?

    - by Malax
    My question is pretty simple and is actually stated in the title. One of my applications throws errors regarding "too many open files" at me, even though the limit for the user the application runs with is higher than the default of 1024 (lsof -u $USER reports 3000 open fds). Because I cannot imagine why this happens, I guess there might be a maximum per process. Any idea is very appreciated!

    Edit: Some values that might help...

        root@Debian-60-squeeze-64-minimal ~ # ulimit -n
        100000
        root@Debian-60-squeeze-64-minimal ~ # tail -n 4 /etc/security/limits.conf
        myapp soft nofile 100000
        myapp hard nofile 1000000
        root soft nofile 100000
        root hard nofile 1000000
        root@Debian-60-squeeze-64-minimal ~ # lsof -n -u myapp | wc -l
        2708
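
    A small diagnostic sketch (Python, assuming a Linux /proc filesystem) that prints the limit the kernel is actually enforcing for a given process and how many descriptors it currently holds, which can differ from what ulimit shows in your own shell:

        import os
        import resource

        # Limits enforced for *this* process (soft, hard)
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print("RLIMIT_NOFILE: soft=%d hard=%d" % (soft, hard))

        # Descriptors this process currently has open (Linux-specific)
        print("open fds:", len(os.listdir("/proc/self/fd")))

        # The same limit as seen by another process, e.g. the misbehaving application
        pid = 1234  # placeholder PID of the application
        with open("/proc/%d/limits" % pid) as f:
            for line in f:
                if "Max open files" in line:
                    print("pid %d: %s" % (pid, line.strip()))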

  • Why does the Java VM process eat up more RAM than specified in the -Xmx parameter?

    - by evilpenguin
    I have multiple servers running CentOS 5.4 and only one application running on a Java VM. I've configured the Java VM with the following arguments:

        java -Xmx4500M -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1024m -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true

    The machines I'm running the VM on have 6 GB of RAM and no other applications running. After a while, the java process starts to hit the swap space really hard; I get this info out of the top command:

        7658 root 25 0 11.7g 3.9g 4796 S 39.4 67.3 543:54.17 java

    On the other hand, if I connect via JConsole, it reports that the Java VM has 2.6 GB used, 4.6 GB committed and 4.6 GB max. java -version returns:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

    Why is the Java VM expanding so much past its allocated heap size? And where does that memory go, if it's not reported in JConsole?

  • How to automate kinit process to obtain TGT for Kerberos?

    - by tore-
    I'm currently writing a puppet module to automate the process of joining RHEL servers to an AD domain, with support for Kerberos. Currently I have problems with automatically obtaining and caching the Kerberos ticket-granting ticket via 'kinit'. If this were to be done manually, I would do this:

        kinit [email protected]

    This prompts for the AD user password, hence there is a problem with automating it. How can I automate this? I've found some posts mentioning using kadmin to create a database with the AD user's password in it, but I've had no luck. Thanks for input
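
    One widely used way around the password prompt is to authenticate from a keytab instead. A minimal sketch (Python driving MIT Kerberos kinit; the keytab path and principal are placeholders, and generating the keytab itself, e.g. with ktutil or msktutil, is a separate step):

        import subprocess

        KEYTAB = "/etc/krb5.keytab"            # placeholder keytab holding the AD key
        PRINCIPAL = "host/server.example.com"  # placeholder principal stored in the keytab

        # "kinit -k -t <keytab> <principal>" obtains and caches a TGT with no password prompt
        subprocess.run(["kinit", "-k", "-t", KEYTAB, PRINCIPAL], check=True)

        # list the cached tickets as a sanity check
        subprocess.run(["klist"], check=True)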

  • Web and email host migration - Limitations and suggestions to make the process as easy as possible.

    - by Jack Hickerson
    I developed a website for a friend of mine to replace his current 'all inclusive' provider (website creation, updating, web hosting, email hosting). I've already paid for a hosting service which currently houses the website I have created. I need to cancel the previous service provider to get the domain migrated to the new host; however, I will still need to transfer or recreate all of the email addresses that everyone in his company had previously. Is there an easy way to migrate email accounts (still linked to the same domain) while migrating to a different host? Will any methods allow all users to retain their archived emails and folder structures? What is the process to do so? Because the current provider is a rather large website development and hosting company, I will have limited access to the data they have stored. As you can probably tell, my knowledge in this area is very limited - any/all suggestions you may have would be greatly appreciated. Thanks in advance. -Jack

  • How to increase max FD limit for a daemon process running under a headless user?

    - by Ameliorator
    To increase the FD limit for a daemon process running under a headless user on an Ubuntu Linux machine, we made the following changes in /etc/security/limits.conf:

        soft nofile 10000
        hard nofile 10000

    We also added session required pam_limits.so in /etc/pam.d/login. The changes got reflected for all the users who logged out and logged in again: whatever new processes are started under those users are getting the new FD limits. But for the daemon which is running under a headless user, the changes are not getting reflected. What is the way to get the changes reflected for a daemon running under a headless user?
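
    Since pam_limits.so is only consulted for PAM login sessions, a daemon started at boot typically never sees limits.conf at all. One option, sketched below in Python purely to illustrate the idea, is for the daemon (or its start-up wrapper) to raise its own limit early; this works up to the hard limit, or beyond it if the process still has root privileges at that point.

        import resource

        WANTED = 10000  # placeholder: the nofile limit the daemon needs

        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

        # An unprivileged process may raise its soft limit up to the current hard limit
        if hard == resource.RLIM_INFINITY:
            new_soft = WANTED
        else:
            new_soft = min(WANTED, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

        print("nofile is now", resource.getrlimit(resource.RLIMIT_NOFILE))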

  • How do I kill a process that keeps respawning?

    - by terabytest
    I got infected by a virus. It looks like I removed it, but it somehow injected a few more processes (I can see them in the task manager) that respawn when I kill them (somehow). Is there a way to destroy those processes so they stop respawning, or, in case something else is respawning them, to kill that "something"? I really don't want to format my PC for this. The data on it is very important to me (personal value), so I'd really like to know a way to do this without reinstalling my OS. I'm on Windows Vista 32-bit.
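
    When a killed process keeps coming back, something else (a watchdog process, a service, or a scheduled task) is usually restarting it, so the restarter has to be found and stopped first. A small diagnostic sketch using Python with the third-party psutil package (the process name is a placeholder; this only identifies the likely respawner, it is not a malware-removal tool):

        import psutil

        TARGET = "badproc.exe"  # placeholder: name of the process that keeps respawning

        for proc in psutil.process_iter(["name"]):
            name = proc.info.get("name") or ""
            if name.lower() == TARGET:
                parent = proc.parent()
                print("found %s (pid %d)" % (name, proc.pid))
                if parent:
                    print("  started by: %s (pid %d)" % (parent.name(), parent.pid))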

  • Can a GPO Startup Script starts a background process and exit immediately?

    - by pepoluan
    I have Googled and not yet found an answer. Scenario: one of my GPOs has a Startup Script that takes a long time to finish. For some reasons, we have to run the scripts synchronously. Naturally, this causes slow startup times (sometimes as long as 15 minutes!) before the Logon screen appears. After profiling and analyzing the perpetrator script, I conclusively determined that the step where it takes a long time to finish will not affect the result of the successive GPOs. In other words, that particular step (and all steps afterwards) can run in the background. My question: is it possible for the Startup Script to just 'trigger' another script/program that will run to completion even after the Startup Script exits? That is, can the "child processes" of the Startup Script continue to live even when the Startup Script's process ends? Additional info: the Domain Controllers are 2008 and 2008 R2. The workstations are Windows XP.
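
    On Windows a child process is not automatically terminated when its parent exits, so one common technique is for the startup script to spawn the slow step as a detached child and return immediately. The following Python fragment is only a sketch of that idea (the script path is a placeholder, and whether it fits your synchronous GPO processing policy is something to verify); the DETACHED_PROCESS flag keeps the child from inheriting the script host's console.

        import subprocess
        import sys

        SLOW_STEP = r"C:\Scripts\slow-step.ps1"  # placeholder path to the long-running part

        if sys.platform == "win32":
            flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
            subprocess.Popen(
                ["powershell.exe", "-ExecutionPolicy", "Bypass", "-File", SLOW_STEP],
                creationflags=flags,
                close_fds=True,
                stdin=subprocess.DEVNULL,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )

        # The startup script itself can now exit; the detached child keeps running.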

  • A good SQL database to process a lot of data?

    - by Dorian
    I have to process something like 10-100 million records. I have to give the data to the client when it's finished. The data is given as SQL requests to execute in the database. He has a powerful server with MySQL, so I think it will be fast enough there. The issue is that my computer is not as powerful as his server, so I would like to use another SQL server that is compatible with MySQL (I export his database and import it on my computer) but more powerful. What should I use? Or am I doomed to use MySQL?
