Search Results

Search found 68994 results on 2760 pages for 'oracle real application clusters'.


  • Application stuck in TCP retransmit

    - by SandeepJ
    I am running Linux kernel 3.13 (Ubuntu 14.04) on two virtual machines, each of which runs inside a different server running ESXi 5.1. A zeromq client-server application runs between the two VMs. After running for about 10-30 minutes, the application consistently hangs because it is unable to retransmit a lost packet. When I run the same setup on Ubuntu 12.04 (kernel 3.11), the application never fails.

    As shown below, "ss" (socket statistics) reports 1 lost packet, an sk_wmem_queued of 14110 (i.e. w14110), and a very high rto (120000):

        State  Recv-Q  Send-Q  Local Address:Port     Peer Address:Port
        ESTAB  0       12350   192.168.2.122:41808    192.168.2.172:55550
               timer:(on,16sec,10) uid:1000 ino:35042 sk:ffff880035bcb100
               skmem:(r0,rb648720,t0,tb1164800,f2274,w14110,o0,bl0) ts sack cubic
               wscale:7,7 rto:120000 rtt:7.5/3 ato:40 mss:8948 cwnd:1 ssthresh:21
               send 9.5Mbps unacked:1 retrans:1/10 lost:1 rcv_rtt:1476 rcv_space:37621

    Since this happens so consistently, I was able to capture the TCP exchange in Wireshark. I found that the lost packet does get retransmitted and is even acknowledged by the TCP stack on the other OS (the sequence number is seen in the ACK), but the sender does not seem to recognize this ACK and continues retransmitting. The MTU is 9000 on both virtual machines and throughout the route, and the packets being sent are large.

    As noted, this does not happen on Ubuntu 12.04 (kernel 3.11), so I diffed the TCP configuration options (seen via "sysctl -a | grep tcp") between 14.04 and 12.04 and found the following differences. I also noticed that net.ipv4.tcp_mtu_probing=0 in both configurations. The left side (<<) is 3.11, the right side (>>) is 3.13:

        << net.ipv4.tcp_abc = 0
        << net.ipv4.tcp_cookie_size = 0
        << net.ipv4.tcp_dma_copybreak = 4096
        << net.ipv4.tcp_early_retrans = 2
        >> net.ipv4.tcp_early_retrans = 3
        << net.ipv4.tcp_fastopen = 0
        >> net.ipv4.tcp_fastopen = 1
        << net.ipv4.tcp_frto_response = 0
        << net.ipv4.tcp_max_orphans = 16384
        << net.ipv4.tcp_max_ssthresh = 0
        >> net.ipv4.tcp_max_orphans = 4096
        << net.ipv4.tcp_max_tw_buckets = 16384
        << net.ipv4.tcp_mem = 94377 125837 188754
        >> net.ipv4.tcp_max_tw_buckets = 4096
        >> net.ipv4.tcp_mem = 23352 31138 46704
        >> net.ipv4.tcp_notsent_lowat = -1

    My question to the networking experts on this forum: are there any other debugging tools or options I can install or enable to dig further into why this TCP retransmit failure occurs so consistently? Are there any configuration changes which might account for this weird behaviour?
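
    One low-effort way to narrow this down is to flip the sysctls that differ between the two kernels back to their 3.11 values, one at a time, and re-run the workload after each change. A minimal sketch, assuming a Linux host, root privileges, and the standard /proc/sys layout (the candidate list is taken from the diff above, not an exhaustive set):

        # Minimal sketch for bisecting the sysctl differences listed above.
        # Assumes Linux, root privileges, and the standard /proc/sys layout;
        # the candidate values are the older (3.11) settings from the diff.
        from pathlib import Path

        CANDIDATES = {
            "net/ipv4/tcp_early_retrans": "2",  # 3.13 default is 3
            "net/ipv4/tcp_fastopen": "0",       # 3.13 default is 1
            "net/ipv4/tcp_mtu_probing": "1",    # 0 on both; worth trying with a 9000-byte MTU
        }

        def set_sysctl(name, value):
            path = Path("/proc/sys") / name
            old = path.read_text().strip()
            path.write_text(value)
            print(f"{name}: {old} -> {value}")

        if __name__ == "__main__":
            for name, value in CANDIDATES.items():
                set_sysctl(name, value)

    If the hang disappears after one of these changes, that default is implicated; capturing with tcpdump on both VMs at the same time also helps confirm which side is misbehaving.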

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification?

    Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose?

    My preference is for the 8k (8192 bytes) block size. It is a good compromise: not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching, and I prefer to have more individual "working units" in my database. For an instance with 4 GB of buffer cache, an 8k block size provides 524,288 blocks of cache.

    The following SQL*Plus script returns the average, median, min, and max rows per block:

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
          select dbms_rowid.ROWID_RELATIVE_FNO(rowid),
                 dbms_rowid.ROWID_BLOCK_NUMBER(rowid),
                 count(*) cnt
          from &tab
          group by dbms_rowid.ROWID_RELATIVE_FNO(rowid),
                   dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table on a database with an 8k block size, each activity has, on average, about 19 rows per block:

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.
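
    To make the cache arithmetic above concrete, here is a small illustrative sketch (not from the original post) showing how many cache "working units" a 4 GB buffer cache yields at each block size Oracle supports:

        # Illustrative arithmetic only: cache blocks per supported block size.
        CACHE_BYTES = 4 * 1024**3  # the 4 GB buffer cache from the example above

        for kb in (2, 4, 8, 16, 32):  # block sizes Oracle supports
            blocks = CACHE_BYTES // (kb * 1024)
            print(f"{kb:2d}k block size -> {blocks:,} cache blocks")

    At 8k this reproduces the 524,288 figure quoted above; halving the block size doubles the number of independent units the cache can hold.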

    Read the article

  • IIS6 - Classic ASP - "out of memory"/"out of string space"

    - by glaucon
    We have a classic ASP application that's under significantly more load than usual, and from time to time we have been getting "out of memory" and "out of string space" errors in the httperr log. We do not usually see these errors. For the moment we cannot change the application. Is there anything we can do to the IIS configuration that will help reduce or stop these errors from occurring? The application pool is currently set to default values.

    Read the article

  • What are the differences between Cygwin on Windows and a real UNIX environment

    - by Tarun
    Hi, I am a C/C++ developer. I have only ever done C++ programming on Windows, never on UNIX, and I want to practice C++ on Unix (because all the big companies ask for C++ with Unix). I have a laptop on which I do not want to install another OS (I have very important software installed on it and I don't have the setups). So I searched and found Cygwin, which is a Unix emulator for Windows, and I am thinking of practicing C++ on it. Please help me: how can I practice and learn in an environment close to the Unix environment used in big companies like IBM? What are the differences between Unix and Cygwin?

    Read the article

  • How to determine which version of Oracle Client is being used from the server.

    - by Robert Love
    Using Oracle 10g (10.2.0.4). Possibly by looking at either logs or system tables, is there a way to determine which version of the Oracle client each connection is using? Our systems initially had 8.1.7 clients, and then 9.x clients. We attempted to manually locate all machines that had older clients and upgrade them to 10.2 clients. We are seeking a method to audit, from the server, whether we were successful in upgrading all of our client machines.
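
    Not part of the original question, but one starting point sometimes suggested is the V$SESSION_CONNECT_INFO view, whose NETWORK_SERVICE_BANNER text often embeds a client-side Oracle Net version string (11g and later also expose a dedicated CLIENT_VERSION column). A rough sketch using the cx_Oracle driver; the credentials are placeholders and the usefulness of the banner text on a given release is an assumption to verify:

        # Hedged sketch: list connected sessions with their Oracle Net banners.
        # Assumes the cx_Oracle driver, placeholder credentials, and SELECT
        # privilege on V$SESSION_CONNECT_INFO; check the banner contents on
        # your own release before relying on them for an audit.
        import cx_Oracle

        conn = cx_Oracle.connect("system", "password", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute("""
            select sid, network_service_banner
            from v$session_connect_info
            order by sid
        """)
        for sid, banner in cur:
            # Banners look roughly like
            # 'TCP/IP NT Protocol Adapter for Linux: Version 10.2.0.4.0 - Production'
            print(sid, banner)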

    Read the article

  • What is a Web Application?

    The wide availability of the Internet has made web applications very popular. They are real programs, delivered over the web for online use. Web applications are designed to run on a website, using a browser as the client.
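
    As a minimal illustration of the browser-as-client model described above (not from the original article), here is a tiny dynamic web application built with only Python's standard library; any browser pointed at http://localhost:8000 acts as its client:

        # Minimal dynamic web application: the server computes the response,
        # and the browser is the client that renders it.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from datetime import datetime

        class App(BaseHTTPRequestHandler):
            def do_GET(self):
                body = f"<h1>Hello from a web application</h1><p>Server time: {datetime.now()}</p>"
                self.send_response(200)
                self.send_header("Content-Type", "text/html; charset=utf-8")
                self.end_headers()
                self.wfile.write(body.encode("utf-8"))

        if __name__ == "__main__":
            HTTPServer(("localhost", 8000), App).serve_forever()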

    Read the article

  • Oracle Subscribes To The Big Data Journal: So Can You!

    - by Roxana Babiciu
    Oracle Product Development has funded access to the Big Data Journal for all Oracle employees. Big Data is a highly innovative, open-access, peer-reviewed journal of world-class research, exploring the challenges and opportunities in collecting, analyzing, and disseminating vast amounts of data. This includes data science, big data infrastructure and analytics, and pervasive computing. Register here to receive Big Data articles online, or sign up for the table-of-contents alert or the RSS feed.

    Read the article

  • Exadata X3 launch webcast - Available on-demand

    - by Javier Puerta
    Available on-demand, this webcast covers everything partners need to know about Oracle’s next-generation database machine. You will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. View here!

    Read the article

  • Application window as polygon texture?

    - by nekome
    Is there a way, or method, to have some application rendered as a texture on a polygon in a 3D scene, and still have full interactivity with it? I'm talking about the Windows platform, and maybe OpenGL, but I guess it doesn't matter whether it's OGL or DX. For example: I run Calculator using WINAPI functions (preferably hidden, not showing on the desktop) and I want to render it inside a 3D scene on some polygon, but still be able to type or click buttons and have it respond. My idea for realizing this is to have WINAPI take a screenshot of the Calculator (or render it to memory if possible) and pass it to OpenGL as a texture each frame (I'm experimenting with SDL through pygame). For mouse interactivity, I would use coordinate translation to calculate where on the application window the action should land, and then use WINAPI functions such as SetCursorPos to position the cursor and others to simulate a click or something else. I haven't found any tutorials on a topic similar to this one. Am I on the right track? Is there a better way to do this, if it is possible at all?
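
    A rough sketch of the capture half of this idea (all library choices here are assumptions, not the only way): find the window with ctypes/WinAPI, screenshot its rectangle with Pillow, and upload the pixels as an OpenGL texture with PyOpenGL. An OpenGL context is assumed to already exist, e.g. from pygame.display.set_mode with the OPENGL flag:

        # Sketch: capture a window's pixels and upload them as an OpenGL texture.
        # Assumes Windows, Pillow, and PyOpenGL, with an OpenGL context already
        # created (e.g. by pygame.display.set_mode with the OPENGL flag).
        import ctypes
        from ctypes import wintypes
        from PIL import ImageGrab
        from OpenGL.GL import (glGenTextures, glBindTexture, glTexImage2D,
                               glTexParameteri, GL_TEXTURE_2D, GL_RGBA,
                               GL_UNSIGNED_BYTE, GL_TEXTURE_MIN_FILTER,
                               GL_TEXTURE_MAG_FILTER, GL_LINEAR)

        user32 = ctypes.windll.user32

        def grab_window(title):
            """Screenshot the rectangle of the window with the given title."""
            hwnd = user32.FindWindowW(None, title)  # e.g. "Calculator"
            rect = wintypes.RECT()
            user32.GetWindowRect(hwnd, ctypes.byref(rect))
            img = ImageGrab.grab(bbox=(rect.left, rect.top, rect.right, rect.bottom))
            return img.convert("RGBA")

        def upload_texture(img):
            """Upload a Pillow image as a 2D OpenGL texture; returns the texture id."""
            tex = glGenTextures(1)
            glBindTexture(GL_TEXTURE_2D, tex)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width, img.height,
                         0, GL_RGBA, GL_UNSIGNED_BYTE, img.tobytes())
            return tex

    For the input half, the matching WinAPI calls would be SetCursorPos plus SendInput (or mouse_event). Note that ImageGrab can only capture what is visible on screen, so the "hidden window" requirement would need a different capture path such as PrintWindow.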

    Read the article

  • Accessing Application Scoped Bean Causes NullPointerException

    - by user2946861
    What is an application scoped bean? I understand it to be a bean which exists for the life of the application, but that doesn't appear to be the correct interpretation. I have an application which creates an application scoped bean on startup (eager=true) and a session scoped bean that tries to access the application scoped bean's objects (which are also application scoped). But when I try to access the application scoped bean from the session scoped bean, I get a NullPointerException. Here are excerpts from my code.

    Application scoped bean:

        @ManagedBean(eager=true)
        @ApplicationScoped
        public class Bean1 implements Serializable {
            private static final long serialVersionUID = 12345L;

            protected ArrayList<App> apps;

            // construct apps so they are available for the session scoped bean
            // do time consuming stuff...

            // getters + setters
        }

    Session scoped bean:

        @ManagedBean
        @SessionScoped
        public class Bean2 implements Serializable {
            private static final long serialVersionUID = 123L;

            @Inject
            private Bean1 bean1;

            private ArrayList<App> apps = bean1.getApps(); // null pointer exception
        }

    What appears to be happening is that Bean1 is created, does its stuff, and then is destroyed before Bean2 can access it. I was hoping that application scope would keep Bean1 around until the container was shut down or the application was killed, but this doesn't appear to be the case.

    Read the article
