Search Results

Search found 68994 results on 2760 pages for 'oracle real application clusters'.


  • Why do HDFS clusters have only a single NameNode?

    - by grautur
I'm trying to understand better how Hadoop works, and I'm reading this from http://wiki.apache.org/hadoop/NameNode: "The NameNode is a Single Point of Failure for the HDFS Cluster. HDFS is not currently a High Availability system. When the NameNode goes down, the file system goes offline. There is an optional SecondaryNameNode that can be hosted on a separate machine. It only creates checkpoints of the namespace by merging the edits file into the fsimage file and does not provide any real redundancy. Hadoop 0.21+ has a BackupNameNode that is part of a plan to have an HA name service, but it needs active contributions from the people who want it (i.e. you) to make it Highly Available." So why is the NameNode a single point of failure? What is bad or difficult about having a complete duplicate of the NameNode running as well?

    Read the article

  • I didn't mean to become a database developer, but now I am. Should I stop or try to get better?

    - by pretlow majette
20 years ago I couldn't afford even a cheap POS program when I opened my first surf shop in the Virgin Islands. I bought a copy of Paradox (remember that?) in 1990 and spent months in a back room scratching out a POS application. Over many iterations, including a switch to Access (2000)/SQL Server (2003), I built a POS and back-office solution that runs four stores with multiple cash registers, a warehouse, and an office. Until recently, all my stores were connected to the same LAN (in a small shopping center) and performance wasn't an issue. Now that we've opened a location in the States, that's changed. Connecting to my local server via the internet has slowed that location's application to a crawl. This is partly due to the slow and crappy DSL service we have in the Virgin Islands, and partly due to my less-than-professional code and SQL. With other far-away stores in the works, I need a better solution. I like my application. My staff knows it well, and I'm not inclined to take on the expense of a proper commercial solution. So where does that leave me? I should probably host my SQL online to sidestep the slow DSL here. I think I can handle cleaning up my SQL queries to speed that up a bit. What about Access? My version seems so old, but I don't like the newer versions with the 'ribbon'. There are so many options... Should I be learning Visual Studio with an eye on moving completely to the web? Will my VBA skills help me at all there? I don't have the luxury of a year at the keyboard to figure it out anymore. What about DotNetNuke, SharePoint, or LightSwitch? They all seem like possibilities, but even understanding their capabilities is daunting. I'm pretty deep into it, but maybe I should bail and hire a consultant or programmer. That sounds expensive, though, and there's no guarantee there either... Any advice would be greatly appreciated. Or, if anybody is interested in buying a small chain of surf shops...

    Read the article

  • How can I install an old version of libc on 12.04 and is it safe to do so?

    - by mathematician1975
I am building an application on 12.04 and I need to run it on an embedded device. The device has libc-2.8.90.so on it and my dev machine has libc-2.15.so. I would like to install libc-2.8.90 onto my dev machine and attempt to link my application against it. I have searched the Ubuntu Software Centre for libc-2.8.90 but cannot find anything resembling it. Is there a way to install this on my machine from the command line? Also, will my system be safe with two versions of libc installed at the same time? Can it lead to any instability?

    Read the article

  • Data Source Security Part 4

    - by Steve Felts
So far, I have covered the Client Identity and Oracle Proxy Session features, with WLS or database credentials. This article will cover one more feature, identity-based pooling. Then there is one more topic to cover - how these options play with transactions.

    Identity-based Connection Pooling

    An identity-based pool creates a heterogeneous pool of connections. This allows applications to use a JDBC connection with a specific DBMS credential by pooling physical connections with different DBMS credentials. The DBMS credential is based on either the WebLogic user mapped to a database user or the database user directly, based on the "use database credentials" setting as described earlier. Using this feature with "use database credentials" enabled seems to be what is proposed in the JDBC standard: basically a heterogeneous pool with users specified by getConnection(user, password).

    The allocation of connections is more complex if the Enable Identity Based Connection Pooling attribute is enabled on the data source. When an application requests a database connection, the WebLogic Server instance selects an existing physical connection or creates a new physical connection with the requested DBMS identity. Heterogeneous connections are created as follows:

    1. At connection pool initialization, physical JDBC connections, up to the configured or default "initial capacity", are created with the configured default DBMS credential of the data source.
    2. An application tries to get a connection from the data source.
    3a. If "use database credentials" is not enabled, the user specified in getConnection is mapped to a DBMS credential, as described earlier. If the credential map doesn't have a matching user, the default DBMS credential from the data source descriptor is used.
    3b. If "use database credentials" is enabled, the user and password specified in getConnection are used directly.
    4. The connection pool is searched for a connection with a matching DBMS credential.
    5. If a match is found, the connection is reserved and returned to the application.
    6. If no match is found, a connection is created or reused based on the maximum capacity of the pool:
    - If the maximum capacity has not been reached, a new connection is created with the DBMS credential, reserved, and returned to the application.
    - If the pool has reached maximum capacity, a physical connection is selected from the pool based on the least recently used (LRU) algorithm and destroyed. A new connection is then created with the DBMS credential, reserved, and returned to the application.

    It should be clear that finding a matching connection is more expensive than in a homogeneous pool, and that destroying a connection and creating a new one is very expensive. If you can use a normal homogeneous pool or one of the light-weight options (client identity or an Oracle proxy connection), those should be used instead of identity-based pooling.

    Regardless of how physical connections are created, each physical connection in the pool has its own DBMS credential information maintained by the pool. Once a physical connection is reserved by the pool, it does not change its DBMS credential even if the current thread changes its WebLogic user credential and continues to use the same connection.

    To configure this feature, select Enable Identity Based Connection Pooling. See "Enable identity-based connection pooling for a JDBC data source" in the Oracle WebLogic Server Administration Console Help: http://docs.oracle.com/cd/E24329_01/apirefs.1211/e24401/taskhelp/jdbc/jdbc_datasources/EnableIdentityBasedConnectionPooling.html

    You must make the following changes to use Logging Last Resource (LLR) transaction optimization with identity-based pooling, to get around the problem that multiple users will be accessing the associated transaction table:
    - Configure a custom schema for LLR using a fully qualified LLR table name. All LLR connections will then use the named schema rather than the default schema when accessing the LLR transaction table.
    - Use database-specific administration tools to grant permission to access the named LLR table to all users that could access this table via a global transaction. By default, the LLR table is created during boot by the user configured for the connection in the data source. In most cases, the database will only allow access to this user and not to mapped users.

    Connections within Transactions

    Now that we have covered the behavior of all of these various options, it's time to discuss the exception to all of the rules. When you get a connection within a transaction, it is associated with the transaction context on a particular WLS instance.

    When getting a connection from a data source configured with non-XA LLR or 1PC (using the JTS driver) with global transactions, the first connection obtained within the transaction is returned on subsequent connection requests, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection when using LLR or 1PC.

    For XA data sources, the first connection obtained within the global transaction is returned on subsequent connection requests within the application server, regardless of the username/password specified and independent of the associated proxy user session, if any. The connection must be shared among all users of the connection within a global transaction within the application server/JVM.
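    From the application's perspective, identity-based pooling surfaces as the two-argument DataSource.getConnection call described above. Here is a minimal sketch, assuming "use database credentials" is enabled and a data source bound at the hypothetical JNDI name jdbc/myDataSource (the scott/tiger credentials are placeholders, not from the article):

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class IdentityPoolingClient {
        public static void main(String[] args) throws Exception {
            // Look up the identity-based data source in the WLS JNDI tree.
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/myDataSource");

            // With "use database credentials" enabled, these values are used
            // directly as the DBMS credential; the pool reserves an existing
            // physical connection with this identity or creates a new one
            // (steps 3b-6 above).
            try (Connection conn = ds.getConnection("scott", "tiger")) {
                // ... run SQL as the "scott" database user ...
            }
        }
    }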

    Read the article

  • BeansBinding Across Modules in a NetBeans Platform Application

    - by Geertjan
Here are two TopComponents, each in a different NetBeans module. Let's use BeansBinding to synchronize the JTextField in TC2TopComponent with the data published by TC1TopComponent and received in TC2TopComponent by listening to the Lookup. The key to the solution is to have the following in TC2TopComponent, which implements LookupListener:

    private BindingGroup bindingGroup = null;
    private AutoBinding binding = null;

    @Override
    public void resultChanged(LookupEvent le) {
        if (bindingGroup != null && binding != null) {
            bindingGroup.getBinding("customerNameBinding").unbind();
        }
        if (!result.allInstances().isEmpty()) {
            Customer c = result.allInstances().iterator().next();
            // put the customer into the lookup of this topcomponent,
            // so that it will remain in the lookup when focus changes
            // to this topcomponent:
            ic.set(Collections.singleton(c), null);
            bindingGroup = new BindingGroup();
            binding = Bindings.createAutoBinding(
                    // a two-way binding, i.e., a change in
                    // one will cause a change in the other:
                    AutoBinding.UpdateStrategy.READ_WRITE,
                    // source:
                    c, BeanProperty.create("name"),
                    // target:
                    jTextField1, BeanProperty.create("text"),
                    // binding name:
                    "customerNameBinding");
            bindingGroup.addBinding(binding);
            bindingGroup.bind();
        }
    }

    I must say that this solution is preferable to what I was doing before I arrived at it: I would get the customer from resultChanged, set a class-level field to that customer, add a document listener (or action listener, which is invoked when Enter is pressed) on the text field and, when a change was detected, set the new value on the customer. None of that is needed with the above bit of code.

    Then, in the node, make sure to use canRename, setName, and getDisplayName, so that when the user presses F2 on a node, the display name can be changed. In other words, when the user types something different in the node display name after pressing F2, the underlying customer name is changed. That happens, in the first place, because the customer name is bound to the text field's value, so the text field's value will also change once Enter is pressed on the changed node display name. Also set a PropertyChangeListener on the node (which implies you need to add property change support to the customer object), so that when the customer object changes (which happens, in the second place, via a change in the value of the text field, as defined in the binding above), the node display name is updated. In other words, there's still a bit of plumbing you need to include. But less than before, and the nasty class-level field for storing the customer in the TC2TopComponent is no longer needed. And a listener on the text field, with a property change listener implemented on the TC2TopComponent, isn't needed either.

    On the other hand, it's more code than I was using before, and I've had to include the BeansBinding JAR, which adds a bit of overhead to my application without much additional functionality over what I was doing originally. I'd lean towards not doing things this way. It seems quite expensive for essentially replacing a listener on a text field and a property change listener (implemented on the TC2TopComponent) for being notified of changes to the customer so that the text field can be updated. On the other other hand, it's kind of nice that all this listening-related code is centralized in one place now.

    So, here's a nice improvement over the above. Instead of listening for a customer, listen for a node, from which the customer can be obtained. Then, bind the node display name to the text field's value, so that when the user types in the text field, the node display name is updated. That saves you from having to listen in the node for changes to the customer's name. In addition to that binding, keep the previous binding, because the previous binding connects the customer name to the text field, so that when the customer display name is changed via F2 on the node, the text field will be updated.

    private BindingGroup bindingGroup = null;
    private AutoBinding nodeUpdateBinding;
    private AutoBinding textFieldUpdateBinding;

    @Override
    public void resultChanged(LookupEvent le) {
        if (bindingGroup != null && textFieldUpdateBinding != null) {
            bindingGroup.getBinding("textFieldUpdateBinding").unbind();
        }
        if (bindingGroup != null && nodeUpdateBinding != null) {
            bindingGroup.getBinding("nodeUpdateBinding").unbind();
        }
        if (!result.allInstances().isEmpty()) {
            Node n = result.allInstances().iterator().next();
            Customer c = n.getLookup().lookup(Customer.class);
            ic.set(Collections.singleton(n), null);
            bindingGroup = new BindingGroup();
            nodeUpdateBinding = Bindings.createAutoBinding(
                    AutoBinding.UpdateStrategy.READ_WRITE,
                    n, BeanProperty.create("name"),
                    jTextField1, BeanProperty.create("text"),
                    "nodeUpdateBinding");
            bindingGroup.addBinding(nodeUpdateBinding);
            textFieldUpdateBinding = Bindings.createAutoBinding(
                    AutoBinding.UpdateStrategy.READ_WRITE,
                    c, BeanProperty.create("name"),
                    jTextField1, BeanProperty.create("text"),
                    "textFieldUpdateBinding");
            bindingGroup.addBinding(textFieldUpdateBinding);
            bindingGroup.bind();
        }
    }

    Now my node has no property change listener, while the customer has no property change support. As in the first bit of code, the text field doesn't have a listener either. All that listening is taken care of by the BeansBinding code.

    Thanks to Toni for help with this, though he can't be blamed for anything that is wrong with it, only thanked for anything that is right with it.
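    By the way, the property change support on the Customer object that the first approach depends on is standard java.beans plumbing. A minimal sketch of what that might look like (the Customer class itself isn't shown in this post, so the field and accessor names here are assumptions):

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    public class Customer {

        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
        private String name;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            String old = this.name;
            this.name = name;
            // notify listeners (e.g., the node) that "name" has changed:
            pcs.firePropertyChange("name", old, name);
        }

        public void addPropertyChangeListener(PropertyChangeListener listener) {
            pcs.addPropertyChangeListener(listener);
        }

        public void removePropertyChangeListener(PropertyChangeListener listener) {
            pcs.removePropertyChangeListener(listener);
        }
    }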

    Read the article

  • [Russian-language event announcement; title garbled by encoding - Oracle DB 11g R1, 29th, IMC]

    - by [email protected]
    [The body of this Russian-language announcement was lost to character-encoding corruption; only a reference to Oracle Database 11g survives.]

    Read the article

  • How to lay out DLLs or COM components when developing a Windows application with .NET?

    - by ziang
    Are there any resources on how to create a Windows application, and especially on how to design DLLs that wrap API calls and the like? It seems that people don't compile the entire project into a single exe for release. What is the best practice for architecting the components of a Windows application based on the MVC pattern? Are there any good resources or books on this topic? Thanks!

    Read the article

  • How does a war file get deployed in an application server?

    - by gurukulki
    I know that when a war file is created for my web application, I have to deploy it: if I am using JBoss, I copy it to the deploy folder, and if I am using WAS, I install it. But I want to know: when I start the server, where does the server start deploying my application? That is, what is the entry point at which it starts loading my classes, properties, DB connections, etc.? Thanks.
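    For the application-code side of that lifecycle, the servlet spec defines a ServletContextListener, which the container calls once the war is deployed and before any requests are served - a common entry point for loading properties and setting up DB connections. A minimal sketch (the class name and log messages are made up for illustration):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Registered in web.xml with a <listener> element.
    public class StartupListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // The container calls this once the war is deployed:
            // a typical place to load properties and initialize DB connections.
            sce.getServletContext().log("Application deployed - loading resources");
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            // Called when the application is undeployed or the server shuts down.
            sce.getServletContext().log("Application undeployed - releasing resources");
        }
    }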

    Read the article

  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.NET developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies, and it's become a bit confusing for me. Right now I'm trying to decide on what language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project, with reports generated from SSRS. A peer suggested using ASP.NET, and I don't have enough experience to determine what would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use, but they are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .NET language. The current interface is similar to a datagrid table where the user clicks in to see the detail of each record; users need to have multiple records open at any given time. I look forward to all the advice I can get.
    EDIT 2010/04/22 2:47 PM EST
    What is your audience? Internal clients within an intranet.
    How complex are the interactions you expect to implement? Not very - displaying data from SQL Server in the UI and allowing user updates to said data, typically just one user modifying a record.
    Do you require near real-time data updates? No.
    How often do you expect to update the application after the first release? Twice a year.
    Do you expect a well-defined set of client platforms? Yes - a Windows XP environment, potentially upgrading to Win7. Currently on IE6, moving to IE7 or 8 within a couple of months.
    Do users need access from anywhere? No, just from their PCs.

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it only ever mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).
    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There never were two independent service curves for either one, as you currently find in BSD and Linux.
    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). All this makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works.
    Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below. Here are some questions I cannot answer because the tutorials contradict each other:
    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, and B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve of 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve - that's what HFSC is all about, after all. By giving both classes a curve of [256 kbit/s 20 ms 128 kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).
    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both have only 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (the current link-share requirement equaling the specified link-share requirement minus the real-time bandwidth already provided to this class)?
    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?
    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share - and if so, what difference?
    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not their average bandwidth, but their slopes) to be different for real-time and link-share traffic?
    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (classes A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)
    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?
    8. One tutorial said you should forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?
    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how would I find out the right slope?
    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • JBoss 5.0 data source configuration in an ear file: how can I run Oracle 10g and 11g on the same server?

    - by Joe
    Currently my setup is, in my ear, META-INF/jboss-app.xml:

    <jboss-app>
      <module>
        <service>datasource-ds.xml</service>
      </module>
    </jboss-app>

    and datasource-ds.xml:

    <datasources>
      <local-tx-datasource>
        <jndi-name>jdbc/mydeployment</jndi-name>
        <connection-url>jdbc:oracle:thin:@eir:myport:mydbname</connection-url>
        <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
        <user-name>myuser</user-name>
        <password>mypassword</password>
        <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
        <metadata>
          <type-mapping>Oracle9i</type-mapping>
        </metadata>
      </local-tx-datasource>
    </datasources>

    This works when ojdbc5.jar is in my servername/lib directory. How can I configure my Oracle driver information in my .ear file so that I can have two different ear deployments, one using Oracle 10g and one using Oracle 11g?

    Read the article

  • ReportBuilder.application fails on my PC - but works on localhost

    - by JayTee
    We're running SQL 2005 on a Win2K3 server and are using SSRS. Here's the situation:
    - I can run Report Builder from localhost.
    - My coworker can run Report Builder on his Vista computer.
    - Another coworker can run Report Builder on his XP SP3 computer (IE7).
    - I can NOT run Report Builder on my XP SP3 computer (IE7).
    I'm told that it could be anything from an errant registry entry to a group policy problem. Here is what I've tried:
    - Put the site into "Trusted Sites" with "low" security.
    - Re-install .NET.
    - Create a new local user account and attempt to run it.
    The result? Every single time, I get a dialog box: "Application cannot be started. Contact the application vendor." I click the Details button and get this:

    PLATFORM VERSION INFO
    Windows : 5.1.2600.196608 (Win32NT)
    Common Language Runtime : 2.0.50727.3607
    System.Deployment.dll : 2.0.50727.3053 (netfxsp.050727-3000)
    mscorwks.dll : 2.0.50727.3607 (GDR.050727-3600)
    dfdll.dll : 2.0.50727.3053 (netfxsp.050727-3000)
    dfshim.dll : 2.0.50727.3053 (netfxsp.050727-3000)

    SOURCES
    Deployment url : http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application
    Server : Microsoft-IIS/6.0
    X-Powered-By : ASP.NET
    X-AspNet-Version : 2.0.50727

    IDENTITIES
    Deployment Identity : ReportBuilder.application, Version=9.0.3042.0, Culture=neutral, PublicKeyToken=c3bce3770c238a49, processorArchitecture=msil

    APPLICATION SUMMARY
    * Online only application.
    * Trust url parameter is set.

    ERROR SUMMARY
    Below is a summary of the errors; details of these errors are listed later in the log.
    * Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application resulted in exception. Following failure messages were detected:
      + Value does not fall within the expected range.

    COMPONENT STORE TRANSACTION FAILURE SUMMARY
    No transaction error was detected.

    WARNINGS
    There were no warnings during this operation.

    OPERATION PROGRESS STATUS
    * [4/7/2010 2:53:57 PM] : Activation of http://www.example.com/ReportServer/ReportBuilder/ReportBuilder.application has started.
    * [4/7/2010 2:53:58 PM] : Processing of deployment manifest has successfully completed.

    ERROR DETAILS
    Following errors were detected during this operation.
    * [4/7/2010 2:53:58 PM] System.ArgumentException - Value does not fall within the expected range.
      - Source: System.Deployment
      - Stack trace:
        at System.Deployment.Application.NativeMethods.CorLaunchApplication(UInt32 hostType, String applicationFullName, Int32 manifestPathsCount, String[] manifestPaths, Int32 activationDataCount, String[] activationData, PROCESS_INFORMATION processInformation)
        at System.Deployment.Application.ComponentStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
        at System.Deployment.Application.SubscriptionStore.ActivateApplication(DefinitionAppId appId, String activationParameter, Boolean useActivationParameter)
        at System.Deployment.Application.ApplicationActivator.Activate(DefinitionAppId appId, AssemblyManifest appManifest, String activationParameter, Boolean useActivationParameter)
        at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
        at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

    COMPONENT STORE TRANSACTION DETAILS
    * Transaction at [4/7/2010 2:53:58 PM]
      + System.Deployment.Internal.Isolation.StoreOperationSetDeploymentMetadata
        - Status: Set
        - HRESULT: 0x0
      + System.Deployment.Internal.Isolation.StoreTransactionOperationType (27)
        - HRESULT: 0x0

    I'm really at a loss. I'm certain there is something on my PC preventing the application from running - but I just don't know what. Google hasn't been much help, because most of the problems I find are related to server configuration (which I know is correct, since it works on other PCs). Help me, Overflow Kenobi, you're my only hope...

    Read the article

  • Check if a file is real or a symbolic link

    - by mattdwen
    Is there a way to tell, using C#, whether a file is real or a symbolic link? I've dug through the MSDN Win32 docs (http://msdn.microsoft.com/en-us/library/aa364232(VS.85).aspx) and can't find anything for checking this. I'm using CreateSymbolicLink from here, and it's working fine.

    Read the article

  • Convert byte array from Oracle RAW to System.Guid?

    - by Cory McCarty
    My app interacts with both Oracle and SQL Server databases using a custom data access layer written in ADO.NET using DataReaders. Right now I'm having a problem with the conversion between GUIDs (which we use for primary keys) and the Oracle RAW datatype. Inserts into Oracle are fine (I just use the ToByteArray() method on System.Guid). The problem is converting back to System.Guid when I load records from the database. Currently, I'm passing the byte array I get from ADO.NET into the constructor for System.Guid. This appears to work, but the Guids that appear in the database do not correspond to the Guids I'm generating in this manner. I can't change the database schema or the query (since it's reused for SQL Server). I need code to convert the byte array from Oracle into the correct Guid.
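    A likely culprit in situations like this is byte order: System.Guid stores its first three fields little-endian, so the array from Guid.ToByteArray() is not in the order the GUID string displays. A sketch of that reordering - written here in Java purely to illustrate the layout, with a made-up example value - would be:

    import java.util.Arrays;

    public class GuidByteOrder {

        // Converts between the big-endian (string-order) GUID byte layout and
        // the mixed-endian layout used by .NET's Guid.ToByteArray(): the first
        // three fields (4, 2, and 2 bytes) are byte-swapped; the last 8 bytes
        // are unchanged. The conversion is its own inverse.
        static byte[] swapGuidByteOrder(byte[] guid) {
            byte[] out = Arrays.copyOf(guid, 16);
            // 4-byte field
            out[0] = guid[3]; out[1] = guid[2]; out[2] = guid[1]; out[3] = guid[0];
            // two 2-byte fields
            out[4] = guid[5]; out[5] = guid[4];
            out[6] = guid[7]; out[7] = guid[6];
            return out;
        }

        public static void main(String[] args) {
            // "00112233-4455-6677-8899-AABBCCDDEEFF" in string (big-endian) order:
            byte[] stringOrder = {
                0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
                (byte) 0x88, (byte) 0x99, (byte) 0xAA, (byte) 0xBB,
                (byte) 0xCC, (byte) 0xDD, (byte) 0xEE, (byte) 0xFF
            };
            // .NET's ToByteArray() yields 33 22 11 00 55 44 77 66 88 99 ... for
            // this value; the same swap maps that layout back to string order.
            System.out.println(Arrays.toString(swapGuidByteOrder(stringOrder)));
        }
    }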

    Read the article
