Search Results

Search found 41993 results on 1680 pages for 'java db'.


  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background
    I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower as it has to travel over the WAN.
    Question
    Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node (if available) to the one that's in the same server room as the active Heartbeat/Corosync node, and (more challengingly) is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?
    Theories
    Most likely I can get Corosync to run something that'd log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that the Zabbix service and its MySQL service would migrate as one if possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • What does this diagnostic output mean?

    - by ChrisF
    I recently had a fault with my broadband connection. It turned out to be a fault with the ISP's or telco's equipment. My ISP posted this diagnostic, and while I understand it in general, I'd like to know more about the details. I'm assuming that ATM means Asynchronous Transfer Mode and PPP means Point to Point Protocol. It was this that my router was indicating as the fault.
    xDSL Status Test Summary
    Sync Status: Circuit In Sync
    General Information
    NTE Status:
    NTE Power Status: Unknown
    Bypass Status:
    DSL Link Information (Upstream / Downstream)
    Loop Loss: 9.0 / 17.0
    SNR Margin: 25 / 15
    Errored Seconds: 0 / 0
    HEC Errors: 0
    Cell Count: 0 / 0
    Speed: 448 / 8128
    TAM Status: Successfully executed operation
    Network Test: Sub-Test Results (Layer / Name / Value / Status)
    Modem - pass: Transmitter Power (Upstream) 12.4 dBm; Transmitter Power (Downstream) 8.8 dBm; Upstream psd -38 dBm/Hz; Downstream psd -51 dBm/Hz
    DSL - pass: Equipment Vendor Name TSTC; Equipment Vendor Id n/a; Equipment Vendor Revision n/a; Training Time 8 s; Num Syncs 1; Upstream bit rate 448 kbps; Downstream bit rate 8128 kbps; Upstream maximum bit rate 1108 kbps; Downstream maximum bit rate 11744 kbps; Upstream Attenuation 3.5 dB; Downstream Attenuation 0.0 dB; Upstream Noise Margin 20.0 dB; Downstream Noise Margin 19.0 dB; Local CRC Errors 0; Remote CRC Errors 0; Up Data Path interleaved; Down Data Path interleaved; Standard Used G_DMT; INP Upstream Symbols n/a; INP Upstream Delay 4 ms; INP Upstream Depth 4; INP Downstream Symbols n/a; INP Downstream Delay 5 ms; INP Downstream Depth 32
    ATM - fail (Reason: No ATM cells received): Number of cells transmitted 30; Number of cells received 0; Number of Near end HEC errors 0; Number of Far end HEC errors n/a
    PPP - fail (Reason: No response from peer): PAP authentication not tested; CHAP authentication not tested
    (I'm not sure that Super User is the best place to ask this, but two people have suggested I ask it here, so here I am.)

    Read the article

  • Which upgrade path for disk IO bound postgres server?

    - by user41679
    Hi all, We currently have a Sun x4270 with 2x quad-core Xeon Nehalem 2.93 GHz CPUs (16 threads), 72 GB of RAM and 16 x 10k SAS disks split between the OS RAID 1, a partition for the Write Ahead Logs which is RAID 10, and a partition for the database tables and indexes which is also RAID 10, all xfs. I'm currently evaluating which path to go down in terms of upgrades. We'll be sharding the DB at some point soon, but for now I need to focus on hardware upgrades specifically. The machine is not CPU or memory bound at all at the moment; IOWait is just becoming an issue. The machine is mostly write access as we have a heavy caching layer. We're seeing about 300 write IOPS on average on both of the database partitions. We don't have any additional storage infrastructure like a Fibre Channel or iSCSI network. Budget isn't too much of a concern, something in line with the size of this server (i.e. no $1m IBM machines). Space is OK on the DB side of things; we're running out obviously, but there's also some reduction we can do. Additional space would be good though. My current thoughts are either:
    * iSCSI SAN, possibly with a 10Gbit network that has solid state acceleration.
    * FusionIO card / Sun F20 card (will the FusionIO card work in the Sun box?)
    * DAS shelf (something like this http://www.broadberry.co.uk/das-direct-attached-storage-servers/cyberstore-224s-das) with a combination of 15k SAS disks and some Intel X25-E drives for DB indexes etc. What would I need to put in the x4270 to add a DAS shelf? I think it's a SAS HBA card - do I have to use Sun's own card or will any PCI Express card work?
    Anything else? What would you guys do from your experience? I appreciate it's a lot of questions, but I haven't expanded a DB machine for a number of years and the landscape has changed dramatically since then! Any advice or feedback would be very much appreciated. Let me know if there's anything else I can clarify. Thanks in advance!

    Read the article

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an Archive Database (which I know is just a Mailbox Database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. Hoping someone on here can give me a few pointers. Currently we have a 3 server setup: Server1, Server2 and Server3. Three databases of course: DB1, DB2 and DB3. We have a DAG set up between them. Server1 has DB1 and DB3 on it; DB1 is not active, DB3 is active. Server2 has DB1 and DB2 on it, both are active. Server3 has DB2 and DB3 on it, both are not active. All three servers are virtual (VMware). Each one is set up identical to the others as follows:
    C:\ 60GB - OS
    E:\ 600GB - DB (currently only 90GB used, pointing to Datastore just for Server2)
    F:\ 200GB - Log (2GB used, pointing to same Datastore as above)
    G:\ 200GB - Restore (0 used, pointing to same Datastore as above)
    The drives are all set to Thin Provisioning, and it looks as though I have 600GB of available space. They have not been on Exchange that long and only have about 70GB worth of PSTs to import back in that will be going to the Archive Database, plus anything older than 2 years from their current inbox that will be moved into there. I was considering placing the Archive DB on the E:\ drive of Server3 (only) like the current DB, but wasn't sure if that was acceptable. I don't plan on setting the Archive DB up with the DAG, just plan on having it as a single repository for older emails and manually back it up every now and then. If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!

    Read the article

  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise, 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. Files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to me to compress the files first. I've tried a number of methods, including:
    gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
    bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
    WinRAR 3.90
    7-Zip 4.65 (7za.exe)
    I've attempted to use WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either - no matter the tool I've tried, it ends up butting against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:
    H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak
    7-Zip (A) 4.65 Copyright (c) 1999-2009 Igor Pavlov 2009-02-03
    Scanning
    Creating archive h:\dbatmp\zip\db-20100419_1228.7z
    Compressing db-20100419_1228.bak
    System error: Unspecified error
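    Since the poster is already trying to split the .BAK file and compress the resulting segments, here is a minimal, hypothetical sketch (not from the original post) of doing both in one pass with plain Java I/O streams; the chunk size, file naming and lack of error handling are assumptions made purely for illustration:
        // Split a large file into fixed-size chunks and gzip each chunk as it is read.
        import java.io.*;
        import java.util.zip.GZIPOutputStream;

        public class SplitAndGzip {
            public static void main(String[] args) throws IOException {
                File input = new File(args[0]);        // e.g. db-20100419_1228.bak
                long chunkSize = 250L * 1024 * 1024;   // ~250 MB per segment, like -v250m
                byte[] buffer = new byte[1 << 20];

                InputStream in = new BufferedInputStream(new FileInputStream(input));
                try {
                    int part = 0;
                    long writtenInPart = 0;
                    OutputStream out = newSegment(input, part);
                    int read;
                    while ((read = in.read(buffer)) != -1) {
                        if (writtenInPart + read > chunkSize) {
                            out.close();                       // finish the current .gz segment
                            out = newSegment(input, ++part);   // start the next one
                            writtenInPart = 0;
                        }
                        out.write(buffer, 0, read);
                        writtenInPart += read;
                    }
                    out.close();
                } finally {
                    in.close();
                }
            }

            // Opens segment N as <original-name>.partN.gz in the same directory.
            private static OutputStream newSegment(File input, int part) throws IOException {
                File f = new File(input.getParentFile(), input.getName() + ".part" + part + ".gz");
                return new GZIPOutputStream(new BufferedOutputStream(new FileOutputStream(f)));
            }
        }
    Each segment comes out as its own .gz file of roughly the chosen size, so a failed transfer only costs one segment rather than the whole backup.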

    Read the article

  • ADSL2+ - High sync-rate, good line attenuation, but low noise margin and slow speeds

    - by Mark Pim
    I've been with my ISP (IdNet) for a few months and have been getting some good speeds, but in the last week the speed has dramatically decreased (from 15 Mbps+ to around 0.2 Mbps). This happens at all times of day, not just peak periods. Obviously I've done all I can to isolate problems at my end - only one PC is connected to the router (via ethernet cable), no other background programs are using the network, etc. I've raised the issue with the ISP and they've suggested trying a new ADSL filter to see if that is causing the problem, but I thought it would also be good to get the opinion of Super User on possible causes or other troubleshooting I can do. Here are the juicy stats :)
    My router (Netgear DGN1000) reports (Downstream / Upstream):
    Connection Speed: 17602 kbps / 1062 kbps
    Line Attenuation: 17.9 dB / 8.6 dB
    Noise Margin: 6.0 dB / 6.1 dB
    I used RouterStats and it seems to show those figures stay fairly consistent all the time. I ran the BT speedtest and it reported:
    download speed of 164 kbps, out of a max achievable of 21000 kbps
    upload speed of 859 kbps, out of 1048 kbps
    DSL connection rate of 17719 kbps down and 1048 kbps up
    IP Profile of 15000 kbps
    Is there any more troubleshooting I can do? Does this look like a problem with my equipment / wiring or with BT's line? Any advice would be great :)

    Read the article

  • Exchange 2010 SP2 database size

    - by Chad
    I have a single Exchange 2010 SP2 environment with 3 DB stores. I am trying to reduce the sizes by moving the mailboxes to a spare DB and then deleting the empty database. I cleaned up the users' mailboxes to reduce the sizes, set the retention periods to 1 day each, and waited several days before moving mailboxes. The databases are backing up fine and clearing log files, but when I move the mailboxes I noticed they were taking a long time, even though some were less than 100MB. When I checked the new database size it seems like the original mailbox size might be moving (1GB instead of 100MB). Exchange is showing the expected smaller mailbox sizes when I run Get-MailboxStatistics against the DB. So if I have 5 mailboxes of 100MB each it is showing something like 3GB instead of around 500MB, and no whitespace. I kept waiting, thinking maybe the retention period had not expired yet, but it has been much longer than 1 day already. I am setting them both to 0 today to see if that works. What am I missing to get the combined mailbox sizes to match the DB size minus whitespace?

    Read the article

  • mysqldump --where with = operator doesn't get all rows - Help!

    - by JonathanLIVE
    I have a situation with a particular table that now thinks it contains 4 Petabytes of data. I know that sounds cool, but I assure you, it is only on a 60GB partition. This table has 9 fields in it. One of them is a domain_id field. It is the best field to identify the rows by, as there are only approximately 6300 of them. The only other field option to match has over 2 million records, and that's just more difficult. I cannot do a straight mysqldump because it will attempt to output all 4PB of data and fill the drive long before it gets close to that, so I need to surgically remove the good stuff, destroy the db, and recreate it. I believe if I can do a dump for each domain_id record, then I will get most of the usable data out of it. This is what I am trying to use:
    mysqldump -u root --skip-opt -q --no-create-info --skip-add-drop-table --max_allowed_packet=1000000000 database table --where="domain_id=10" > domains10.sql
    Using this I expect every row with the domain_id 10 to be exported. However, when I check the export, I am only getting 1 row, when, if I look at the db, there are many many rows. It is as though the operator just finds one, then gives up. I have tried various operators. Using < or > I am able to get more of the data, but the export stops short at certain rows where the data has been compromised. With over 6000 to go through, I can't narrow down which rows are being affected in the export easily enough. So, what I need is an operator that will basically do what I thought = would do: simply give me an export of all records that match the specific field. Also note, the only way I got this DB even accessible is through innodb_force_recovery = 3. So I need to get this right, because after this is done, I have to drop the db in order to make mysql functional again. Looking forward to any helpful answers.
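    If mysqldump keeps stopping short on the damaged table, one workaround to sketch - purely hypothetical, with the connection URL, credentials and table name as placeholders, and with values quoted naively as strings that MySQL will coerce for numeric columns - is to pull the matching rows over JDBC and write the INSERT statements yourself:
        // Hypothetical JDBC-based export of one domain_id's rows to a .sql file.
        // Requires the MySQL Connector/J driver on the classpath.
        import java.io.PrintWriter;
        import java.sql.*;

        public class DumpDomain {
            public static void main(String[] args) throws Exception {
                int domainId = Integer.parseInt(args[0]);
                Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/database", "root", "password");
                PreparedStatement ps = con.prepareStatement(
                        "SELECT * FROM table_name WHERE domain_id = ?");
                ps.setInt(1, domainId);
                ResultSet rs = ps.executeQuery();
                ResultSetMetaData md = rs.getMetaData();
                PrintWriter out = new PrintWriter("domains" + domainId + ".sql");
                while (rs.next()) {
                    StringBuilder row = new StringBuilder("INSERT INTO table_name VALUES (");
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        if (i > 1) row.append(", ");
                        Object v = rs.getObject(i);
                        // Naive quoting: everything non-null is emitted as a quoted string.
                        row.append(v == null ? "NULL"
                                             : "'" + v.toString().replace("'", "''") + "'");
                    }
                    out.println(row.append(");"));
                }
                out.close();
                rs.close();
                ps.close();
                con.close();
            }
        }
    Run it once per domain_id; rows that are unreadable because of the corruption will still fail, but each failure is isolated to a single SELECT rather than aborting the whole dump.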

    Read the article

  • BIND9 Forwarding by view

    - by Triztian
    Hi, I think this is a simple issue. I'd like to enable forwarding only for certain IPs in the LAN network. For example, I have 2 acl lists:
    acl "office1" {
        192.168.1.15; // With internet access
    };
    acl "production" {
        192.168.1.101; // No internet access
    };
    I know that there are probably more efficient ways to restrict internet access, but at the moment this is what I'd like to try. Here's what I've tried in named.conf.local:
    // Include my acl definitions
    include "/etc/bind/acls.conf";
    view "no-internet" {
        match-clients { production; };
        include "/etc/bind/named.conf.default-zones";
        zone "localdomain.com" {
            type master;
            file "/etc/bind/db.localdomain.com";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/db.192.168.1";
        };
    };
    view "internet" {
        match-clients { office1; };
        include "/etc/bind/named.conf.default-zones";
        forwarders {
            201.56.59.14; // Made up
            201.56.59.15; // Made up
        };
        zone "localdomain.com" {
            type master;
            file "/etc/bind/db.localdomain.com";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/db.192.168.1";
        };
    };
    As you can see, I want a localdomain.com defined for every computer in my network and internet forwarding for the computers in the office but not for the ones on the production floor. I've modified my conf file, but the IP in the "no-internet" acl is still able to resolve the domains, even though I've rebooted the computer, flushed the DNS using ipconfig /flushdns and set my DNS server as the only one. Why is this still happening? Thanks in advance.

    Read the article

  • What is the best multi-server configuration with OpenVPN?

    - by sebut
    We have a number of Database severs running MongoDB on Debian plus a number of Application servers also on Debian. The db servers hold replicating db clusters, so they need to talk to each other. Application servers need to talk to all db servers (for reasons of fault tolerance). The servers are potentially spread across multiple hosting centers, so we need secure channels between all servers. The number of servers is bound to grow, so we need a VPN solution that's easy to maintain and expand. This is why I feel that SSH that we use for testing might not be up to the task and OpenVPN seems the way to go. I have ruled out TAP, since I understand that this would mean all traffic going to all the servers - perhaps this is a misunderstanding and TAP acts more like a switch? With TUN devices I imagine that all DB servers would live in their own separate subnet, they would also need a client configured to be able to connect to each of their peers. The application servers could live in a common subnet range with a client config only. Does this sound like a reasonable setup? Strangely, on the web I did not find anything about multi-server with OpenVPN. Thanks for all insights and ideas!

    Read the article

  • Welcome Stephen Chin and James Weaver to Oracle!

    - by arungupta
    Stephen Chin and James Weaver - the two JavaFX "rockstar" speakers from the community - are joining Oracle's Java Evangelist Team. Both of them have co-authored a recently released book - Pro JavaFX 2 - and are well known for their passion to promote JavaFX. This shows Oracle's continued commitment to Java and JavaFX. Jim blogs at javafxpert.com and can be reached on @JavaFXpert. Steve blogs at steveonjava.com and can be reached at @steveonjava. You'll have an opportunity to meet and engage with them at different community facing activities. Welcome Stephen and James to Oracle!

    Read the article

  • ODI and OBIEE 11g Integration

    - by David Allan
    Here we will see some of the connectivity options to OBIEE 11g using the JDBC driver. You'll see, based upon some connection properties, how the physical or presentation layers can be utilized. In the integrator's guide for OBIEE 11g you will find a brief statement indicating that there actually is a JDBC driver for OBIEE. In OBIEE 11g it's now possible to connect directly to the physical layer; Venkat has an informative post here on this topic. In ODI 11g the Oracle BI technology is shipped with the product, along with KMs for reverse engineering and for using OBIEE models as a data source. When you install OBIEE 11g a lightweight demonstration application is preinstalled in the server; when you open this in the BI Administration tool we see the regular 3-panel view within the administration tool. To interrogate this system via JDBC (just like ODI does using the KMs) we need a couple of things: the JDBC driver from OBIEE 11g, a Java client program and the credentials. In my Java client program I want to connect to the OBIEE system; when I connect I can interrogate what the JDBC driver presents for the metadata. The metadata projected via the JDBC connection's DatabaseMetaData changes depending on whether the property NQ_SESSION.SELECTPHYSICAL is set when the Java client connects. Let's use the sample app to illustrate. I have a Java client program here that will print out the tables in the DatabaseMetaData; it will also output the catalog and schema. For example, if I execute without any special JDBC properties as follows:
    java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass
    then I get the following returned, representing the presentation layer (the sample I used is XML, and has no schema):
    Catalog               Schema   Table
    Sample Sales Lite     null     Base Facts
    Sample Sales Lite     null     Calculated Facts
    ...
    Sample Targets Lite   null     Base Facts
    ...
    Now if I execute with the only difference being the JDBC property NQ_SESSION.SELECTPHYSICAL with the value Yes, then I see a different set of values representing the physical layer in OBIEE:
    java -classpath .;%BIHOMEDIR%\clients\bijdbc.jar meta_jdbc oracle.bi.jdbc.AnaJdbcDriver jdbc:oraclebi://localhost:9703/ weblogic mypass NQ_SESSION.SELECTPHYSICAL=Yes
    The following is returned:
    Catalog                 Schema   Table
    Sample App Lite Data    null     D01 Time Day Grain
    Sample App Lite Data    null     F10 Revenue Facts (Order grain)
    ...
    System DB (Update me)
    ...
    If this was a database system such as Oracle, the catalog value would be the OBIEE database name and the schema would be the Oracle database schema. Other systems which have a real catalog structure, such as SQL Server, would use their catalog value. It's this 'Catalog' and 'Schema' value that is important when integrating OBIEE with ODI. For the demonstration application in OBIEE 11g, the following illustration shows how the information from OBIEE is related via the JDBC driver through to ODI. In the XML example above, within ODI's physical schema definition on the right, we leave the schema blank since the XML data source has no schema. When I did this at first, I left the default value that ODI places in the Schema field, which was '<Undefined>' (like the image below), but this string is actually used in the RKM, so I ended up not finding any tables in this schema! Entering an empty string resolved this. Below we see a regular Oracle database example that has the database, schema, physical table structure, and how this is defined in ODI.
Remember back to the physical versus presentation layer usage when we passed the special property; to do this in ODI, the data server has a panel for properties where you can define key/value pairs. So if you want to select physical objects from the OBIEE server, then you must set this property. An additional change in ODI 11g is the OBIEE connection pool support, which has been implemented via a 'Connection Pool' flex field for the Oracle BI data server. Here you set the connection pool name from the OBIEE system that you specifically want to use; this is used by the Oracle BI to Oracle (DBLINK) LKM, so if you are using that LKM you must set this flex field. Hopefully a useful insight into some of the mechanics of how this hangs together.
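    The meta_jdbc client used above isn't listed in the post; a minimal sketch of what such a client might look like (the class name, argument order and output format are assumptions, not taken from the original) is:
        // Connects to OBIEE over JDBC and prints catalog/schema/table from DatabaseMetaData.
        // Passing NQ_SESSION.SELECTPHYSICAL=Yes as an extra argument exposes the physical layer.
        import java.sql.*;
        import java.util.Properties;

        public class meta_jdbc {
            public static void main(String[] args) throws Exception {
                String driver = args[0];   // e.g. oracle.bi.jdbc.AnaJdbcDriver
                String url    = args[1];   // e.g. jdbc:oraclebi://localhost:9703/
                Properties props = new Properties();
                props.setProperty("user", args[2]);
                props.setProperty("password", args[3]);
                // Any further arguments are treated as extra connection properties,
                // such as NQ_SESSION.SELECTPHYSICAL=Yes.
                for (int i = 4; i < args.length; i++) {
                    String[] kv = args[i].split("=", 2);
                    props.setProperty(kv[0], kv[1]);
                }
                Class.forName(driver);
                Connection con = DriverManager.getConnection(url, props);
                try {
                    DatabaseMetaData md = con.getMetaData();
                    ResultSet rs = md.getTables(null, null, "%", null);
                    System.out.println("Catalog\tSchema\tTable");
                    while (rs.next()) {
                        System.out.println(rs.getString("TABLE_CAT") + "\t"
                                + rs.getString("TABLE_SCHEM") + "\t"
                                + rs.getString("TABLE_NAME"));
                    }
                    rs.close();
                } finally {
                    con.close();
                }
            }
        }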

    Read the article

  • ADF and Oracle E-Business Suite Integration Series Index

    - by Juan Camilo Ruiz
    I'm creating this entry with the purpose of keeping one page that lists all the past and future entries in the series on integration of ADF with Oracle E-Business Suite, so you can access all the articles and reference information that resides in other places too. This is also the one link that I can reference while presenting on this topic. Here is the list of individual entries from the series:
    - ADF and Oracle E-Business Suite Integration Series: Displaying Read-Only EBS data on ADF
    - ADF and Oracle E-Business Suite Integration Series: Displaying Read-Only EBS data on iPad
    - Using the Oracle E-Business Suite SDK for Java on ADF Applications
    - Securing ADF Applications Using the Oracle E-Business Suite SDK JAAS Implementation
    - Debugging ADF Security in JDeveloper 11g
    - Adding a Role to a Responsibility for Use with the Oracle E-Business Suite SDK for Java JAAS Implementation
    - Embedding ADF UI Components into OAF regions
    Bonus Material: Webcast Replays
    - Using Oracle ADF with Oracle E-Business Suite: The Full Integration View
    - Best Practices for Using Oracle E-Business Suite SDK for Java with Oracle ADF
    Documents
    - FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1)

    Read the article

  • Oracle WebLogic Server 12c Launch Event - Dec 1, 2011, 10am PT

    - by arungupta
    Calling all IT managers, architects, and developers to find out how the new release of Oracle WebLogic Server 12c is:
    - Designed to help you seamlessly move into the public or private cloud with an open, standards-based platform
    - Built to drive higher value for your current infrastructure and significantly reduce development time and cost
    - Optimized to run your solutions for Java Platform, Enterprise Edition (Java EE); Oracle Fusion Middleware; and Oracle Fusion
    - Enhanced with transformational platforms and technologies such as Java EE 6, Oracle's Active GridLink for RAC, Oracle Traffic Director, and Oracle Virtual Assembly Builder
    When? Dec 1, 2011, 10am - 12pm PT
    Where? Register online!
    Here are some other links for you to follow: blogs.oracle.com/weblogicserver, @OracleWebLogic, youtube.com/OracleWebLogic, facebook.com/OracleWebLogic, Steve Button's Blog, Jeff West's Blog, WebLogic Forums, WebLogic @ OTN.
    Almost ready to unwrap the ribbons, pop open the cork, at the start line ... or whatever fits your analogy :-) And in case you are wondering ... here is a snapshot of the WebLogic 12c administration console:

    Read the article

  • Hiding an item conditionally through SPEL in OAF (VO Extension + Personalization)

    - by Manoj Madhusoodanan
    In this blog I will explain how to conditionally set the property of an item through personalization. Let me discuss it using a business scenario. My customer wants to make the Hold from Payment / All Invoices column read-only when the Operating Unit is UK (configured in a lookup XXCUST_EXCLUDED_ORGS).
    Analysis
    First we have to find out the page and business components.
    Page: /oracle/apps/pos/supplier/webui/QuickUpdatePG
    View Object: oracle.apps.pos.supplier.server.SitesVO
    Solution
    Download oracle.apps.pos.supplier.server.SitesVO from $JAVA_TOP to JDEV_USER_HOME/myprojects. Make sure of the transfer mode of each file (see the table below):
    .xml - ASCII
    .class - Binary
    .tar - Binary
    .java - ASCII
    Since there is no VO attribute available to determine whether the site's Org is in the lookup, we have to add the logic inside a custom VO attribute, so a VO extension is required in this scenario. Add an attribute "isPymtReadOnlyStr" to the existing query. This column returns 'Y' if there is a match in the lookup, otherwise 'N'. Create a transient attribute "isPymtReadOnly" of type BOOLEAN. This will return TRUE if "isPymtReadOnlyStr" is "Y", otherwise FALSE. The reason behind adding "isPymtReadOnly" is that we are setting the item property as read-only through SPEL, which will recognize only BOOLEAN, but a SQL query doesn't support BOOLEAN. So we are building a BOOLEAN attribute from the SQL which we will use in the personalization layer to set the item property.
    Steps
    1) Create a new VO xxcust.oracle.apps.pos.supplier.server.XXCUSTSitesVO which extends from oracle.apps.pos.supplier.server.SitesVO. Make sure the binding style is the same as SitesVO. Create the XXCUSTSitesVO with the same query as SitesVO; later we will add the new attribute to XXCUSTSitesVO. At this point all the existing VO attributes have their Updatable property set to "Always". Press Finish without creating XXCUSTSitesVORowImpl.java.
    2) Select the XXCUSTSitesVO from the JDeveloper Application Navigator. Modify the query and add the extra column.
    3) Create a new transient attribute as follows.
    4) Once you modify the query, all the existing attributes' Updatable property will change to "Never", so revert that property back to the original.
    5) Create XXCUSTSitesVORowImpl.java by checking the following check box.
    6) Add the following code snippet in XXCUSTSitesVORowImpl.java.
    7) Create the substitution for SitesVO as follows. The following entry will get created in the current jpx file:
    <Substitutes>
      <Substitute OldName="oracle.apps.pos.supplier.server.SitesVO" NewName="xxcust.oracle.apps.pos.supplier.server.XXCUSTSitesVO" />
    </Substitutes>
    8) Migrate XXCUSTSitesVOImpl.java, XXCUSTSitesVORowImpl.java and XXCUSTSitesVO.xml from the desktop to the actual instance.
    9) Migrate the substitution using jpximporter.
    10) Restart the server and verify that the substitution has been done properly.
    11) Go to /oracle/apps/pos/supplier/webui/QuickUpdatePG and personalize the page. Set the item's Read Only property to ${oa.SitesVO.IsPymtReadOnly}.
    12) Click on Apply and in the next page click on Return to Application. Verify your output.
    Note: You can remove the substitution using the following script. Please click here.
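    The code snippet referenced in step 6 is not reproduced in this excerpt; a minimal, hypothetical sketch of what such a transient getter might look like (the class and attribute names are assumed from the article, not copied from it) is:
        // Hypothetical sketch of XXCUSTSitesVORowImpl.java for step 6.
        package xxcust.oracle.apps.pos.supplier.server;

        import oracle.apps.fnd.framework.server.OAViewRowImpl;

        public class XXCUSTSitesVORowImpl extends OAViewRowImpl {
            // Maps the 'Y'/'N' column added to the query onto the BOOLEAN transient
            // attribute read by the SPEL expression ${oa.SitesVO.IsPymtReadOnly}.
            public Boolean getIsPymtReadOnly() {
                return Boolean.valueOf("Y".equals((String) getAttribute("IsPymtReadOnlyStr")));
            }
        }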

    Read the article

  • The JRockit Book is Now in Print!

    - by Marcus Hirt
    Yes, I know. It's been in print for some days already, but I haven't found time to write about it until now. The book is a good guide for JVMs in general, and for JRockit in particular. If you've ever wondered how the innards of the Java Virtual Machine work, or how to use JRockit Mission Control to hunt down problems in your Java applications, this book is for you. The book is written for intermediate to advanced Java developers. These are the chapters:
    - Getting Started
    - Adaptive Code Generation
    - Adaptive Memory Management
    - Threads and Synchronization
    - Benchmarking and Tuning
    - JRockit Mission Control
    - The Management Console
    - The Runtime Analyzer
    - The Flight Recorder
    - The Memory Leak Detector
    - JRCMD
    - Using the JRockit Management APIs
    - JRockit Virtual Edition
    - Appendix A: Bibliography
    - Appendix B: Glossary
    - Index
    The book is 588 pages long. For more information about the book, see the book page at Packt.

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that. For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.
    Application Input
    Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache holding all credit card transactions. The cache is configured in the EPN as shown below:
    <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
    <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent" key-properties="cardID, transactionTime" caching-system="CohCacheSystem">
    </wlevs:cache>
    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.
    <wlevs:event-type type-name="FraudWarningEvent">
      <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
      </wlevs:properties>
    </wlevs:event-type>
    Cache Data Aggregation using a Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the Spring context file as shown below:
    <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
      <property name="cache" ref="CCTransactionsCache"/>
    </bean>
    This is used by the Java class to set a static property:
    public void setCache(Map cache) { s_cache = (NamedCache) cache; }
    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistory object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.
    public static CardHistoryData execute(String cardID) {
      ...
      Filter filter = QueryHelper.createFilter("cardID = :cardID and transactionTime >= :transactionTime", map);
      CardHistoryData history = new CardHistoryData();
      Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
      history.setTotalAmount(sum);
      ...
      return history;
    }
    The Java cartridge method is used from CQL as seen below:
    select cardID, transactionTime, transactionAmount, CCTransactionsAggregator.execute(cardID) as cardHistory
    from inputChannel
    where transactionAmount > 1000
    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
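    For completeness, here is a hedged sketch of what the full cartridge class might look like; the maximum-amount and transaction-count aggregations, the 30-day cutoff computation, and the CardHistoryData setter names beyond setTotalAmount are assumptions rather than excerpts from the original post:
        // Sketch of a Coherence-backed aggregation cartridge for the fraud sample.
        import java.util.HashMap;
        import java.util.Map;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.Filter;
        import com.tangosol.util.QueryHelper;
        import com.tangosol.util.aggregator.Count;
        import com.tangosol.util.aggregator.DoubleMax;
        import com.tangosol.util.aggregator.DoubleSum;

        public class CCTransactionsAggregator {
            private static NamedCache s_cache;

            // Injected from the Spring context, as in the article.
            public void setCache(Map cache) {
                s_cache = (NamedCache) cache;
            }

            public static CardHistoryData execute(String cardID) {
                // Look back 30 days from "now" (cutoff computation is an assumption).
                long cutoff = System.currentTimeMillis() - 30L * 24 * 60 * 60 * 1000;

                Map bindings = new HashMap();
                bindings.put("cardID", cardID);
                bindings.put("transactionTime", Long.valueOf(cutoff));
                Filter filter = QueryHelper.createFilter(
                        "cardID = :cardID and transactionTime >= :transactionTime", bindings);

                CardHistoryData history = new CardHistoryData();
                history.setTotalAmount((Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount")));
                history.setMaxAmount((Double) s_cache.aggregate(filter, new DoubleMax("getTransactionAmount")));
                history.setTotalCount(((Integer) s_cache.aggregate(filter, new Count())).intValue());
                return history;
            }

            // Stand-in for the article's CardHistoryData bean (field/setter names assumed).
            public static class CardHistoryData {
                private double totalAmount;
                private double maxAmount;
                private int totalCount;
                public void setTotalAmount(Double d) { this.totalAmount = d.doubleValue(); }
                public void setMaxAmount(Double d)   { this.maxAmount = d.doubleValue(); }
                public void setTotalCount(int n)     { this.totalCount = n; }
            }
        }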

    Read the article

  • Tab Sweep - Jazoon aftermath, PaaS press, REST +WebSocket, ...

    - by alexismp
    Recent Tips and News on Java EE 6 & GlassFish:
    • The GlassFish Tale - Oracle Scene (Markus)
    • Notes from Jazoon 2011 (Marek)
    • Jazoon '11 presentations (Jazoon.com)
    • Enterprise Java upgrade geared to PaaS clouds (TechCentral.ie)
    • JavaOne 2011: Content review process and Tips for submissions (Arun)
    • REST + WebSocket applications? Why not using the Atmosphere Framework (Jeanfrancois)
    • Get your Java 7 screensaver! (Duke)

    Read the article

  • Does C# have a future in games development?

    - by IbrarMumtaz
    I recently learned that Minecraft is powered by Java, from a recent interview on CVG.co.uk about a possible collaboration between two former, and now competing, colleagues. In the interview he bluntly said that the founder of Minecraft is a Java coder and he is a C or C++ coder, so they are incompatible with each other and collaborating on future projects will be difficult. This got me thinking: if Java could do that, what does the future hold for MS's very popular C# language and .NET platform as far as games, or mainstream games development, is concerned?

    Read the article

  • Test Driven Development with vxml

    - by Malcolm Anderson
    It's been 3 years since I did any coding and I am starting back up with Java using NetBeans and GlassFish. Right off the bat I noticed two things about Java's ease of use: the Java IDE (NetBeans) has finally caught up with Visual Studio, and JUnit has finally caught up with NUnit. NetBeans intellisense exists and I don't have to subclass everything in JUnit.
    Now on to the point of this very short post (request): I'm trying to figure out how to do test driven development with VXML and have not found anything yet. I've done my Google search, but unfortunately "TDD" in IVR land has something to do with helping the hearing impaired. I've found a VXML simulator or two, but none of their marketing is getting my hopes up.
    My request - if you have done any agile engineering work with VXML, contact me; I need to pick your brain and bring some ideas back to my team. Thanks in advance.
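    One hedged starting point (not something the poster mentions) is to treat the generated VoiceXML as plain XML and assert against its structure with JUnit and XPath; the document path, form id and expected field name below are made-up examples:
        // A minimal JUnit 4 sketch: parse a VoiceXML document and assert on its structure.
        import static org.junit.Assert.assertEquals;

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathFactory;
        import org.junit.Test;
        import org.w3c.dom.Document;

        public class GreetingDialogTest {
            @Test
            public void mainMenuPromptsForAccountNumber() throws Exception {
                // "dialogs/main-menu.vxml" is a made-up path to the VXML under test.
                // Default (non-namespace-aware) parsing keeps element names as written in the file.
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new File("dialogs/main-menu.vxml"));

                XPath xpath = XPathFactory.newInstance().newXPath();
                // Assert the dialog asks for the field we expect before any caller hears it.
                String fieldName = xpath.evaluate("/vxml/form[@id='mainMenu']/field[1]/@name", doc);
                assertEquals("accountNumber", fieldName);
            }
        }
    Tests like this won't exercise speech recognition, but they do let dialog-generation code fail fast in a plain JUnit run before anything reaches an IVR simulator.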

    Read the article

  • Additional useful skill?

    - by Sergey
    Almost every language has some additional technology or skill or whatever which can work in a pair with it but still be something fresh. For example, Java + Flex. It's a good pair - those who learn Java and want something both useful and new may try Flex. What are the "pairs" for the most popular languages (Java, C#, C++, etc.)?
    PS: Most people advise learning functional programming as an additional skill, but this is very fuzzy. They talk about such abstract things as a wider programming perspective and other things, but you can hardly say whether these functional skills will really be needed. Yeah, maybe some basics of it can be useful, but seriously learning Lisp doesn't seem that promising.

    Read the article

  • Partner Webcast – Weblogic for Developers - 12 July 2012

    - by Thanos
    Oracle WebLogic Server is the industry's leading application server for deploying Java EE applications, with support for new features for lowering cost of operations, improving performance and enhancing scalability. But it's also a great choice for Java developers because of the differentiating capabilities that facilitate integration with other tools and frameworks, and promote reusability and rapid redeployment of your applications. During the webinar we're going to explore these differentiating features in more detail.
    Agenda:
    - Java EE standards support in different WebLogic Server versions
    - WebLogic classloading: how WebLogic loads classes, the filtering classloader, shared libraries, the Classloader Analysis Tool
    - Spring support in WebLogic
    - WebLogic integration with Apache Maven
    - Advanced deployment features: FastSwap, side-by-side deployment
    - Q&A session
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now!
    For any questions please contact us at [email protected]. Visit our ISV Migration Center blog regularly or follow us @oracleimc to learn more about Oracle technologies as well as upcoming partner webcasts and events.

    Read the article

  • Oracle Exalogic Elastic Cloud X2-2 is available, for a better development experience in the cloud

    Oracle Exalogic Elastic Cloud X2-2 is available, for a better development experience in the cloud. Update of 08.12.2010 by Katleen. Oracle announced today the arrival of its cloud infrastructure "Exalogic Elastic Cloud X2-2". It will serve as a consolidation platform for nearly all Java and non-Java applications and workloads. This hardware and software system combines cutting-edge 64-bit x86 processors, an InfiniBand-based I/O fabric, and solid-state storage, running Oracle WebLogic Server. This novel combination enables interaction between the various components of a Java app and zero-copy I/O....

    Read the article

  • Installing MATLAB on Ubuntu 12.10, persistent "permission denied" problem

    - by Javier
    I'm trying to install MATLAB on Ubuntu 12.10; I had it on my last laptop with 12.04 and didn't have much trouble installing it. I'm getting this problem:
    j@jgb:~/Programs/Instaladores/matlab$ ./install
    Preparing installation files ...
    Installing ...
    ./install: 1: eval: /tmp/mathworks_1436/sys/java/jre/glnxa64/jre/bin/java: Permission denied
    Finished
    And nothing changes if I change the permissions on the java file or on the install file, as other posts suggest. In my /etc/fstab file I had this line added (but nothing changes if I comment it out):
    tmpfs /tmp tmpfs nodev,nosuid,noexec,mode=1777 0 0
    Thanks

    Read the article

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high risk, privileged accounts. If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive). For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A and was only able to answer a few of the questions that came in. So I wanted to devote this blog to answering the outstanding questions. Here they are.
    1. Can OPAM track admin or DBA activity details during a password check-out session? Oracle Audit Vault is monitoring these activities, which can be correlated to check-out events.
    2. How would OPAM handle simultaneous requests? OPAM can be configured to allow for shared passwords. By default sharing is turned off.
    3. How long are the passwords valid? Are the admins required to manually check them in? Password expiration can be configured and set in the password policy according to your corporate standards. You can specify whether you want forced check-in or not.
    4. Can 2-factor authentication be used with OPAM? Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager and Oracle Adaptive Access Manager.
    5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts? OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform.
    6. How and where are the passwords stored in OPAM? OPAM uses the Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords. This is the same system used by Oracle Applications.
    7. Does OPAM support hierarchical/level based privileges? Is the log maintained for independent review/audit? Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.
    8. Does OPAM support emergency access in the case where approvers are not available until later? Yes. OPAM can be configured to release a password under a "break-glass" emergency scenario.
    9. Does OPAM work with AIX? Yes, supported UNIX versions are listed in the "certified components" section of the UNIX connector guide at: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0
    10. Does OPAM integrate with Sun Identity Manager? Yes. OPAM can be integrated with SIM using the REST APIs. OPAM has direct integration with Oracle Identity Manager 11gR2.
    11. Is OPAM available today and what does it cost? Yes. OPAM is available now. Ask your Oracle Account Manager for pricing.
    12. Can OPAM be used in SAP environments? Yes, supported SAP versions are listed in the "certified components" section of the SAP connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0
    13. How would this product integrate, if at all, with access to a particular field in the DB that needs additional security, such as SSNs? OPAM can work with DB Vault and DB Firewall to provide fine grained access control for databases.
    14. Is VM supported? As a deployment platform Oracle VM is supported. For further details about supported virtualization technologies see Oracle Fusion Middleware Supported System Configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html
    15. Where did this (OPAM) technology come from? OPAM was built by Oracle Engineering.
    16. Are all Linux flavors supported? How about BSD? BSD is not supported. For supported UNIX versions see the "certified components" section of the UNIX connector guide: http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0
    17. What happens if users don't check passwords in at the end of a work task? In OPAM a time frame can be defined for how long a password can be checked out. The security admin can force a check-in at any given time.
    18. Is MySQL supported? Yes, supported DB versions are listed in the "certified components" section of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA
    19. What happens when OPAM crashes and you need to use the password? OPAM can be configured for high availability, but if required, OPAM data can be backed up/recovered. See the OPAM admin guide.
    20. Is OPAM a standalone product or does it leverage other components from IDM? OPAM can be run stand-alone, but will also leverage other IDM components.

    Read the article
