Search Results



  • Some Important Information About Search Engine Optimization

    Search Engine Optimization (SEO) is a set of techniques for making your web pages more useful to your visitors by making them easier for search engines to understand and index. SEO is a cost-effective way to increase your site's page views by building pages that rank highly in search engine results.

    Read the article

  • JMX Based Monitoring - Part Three - Web App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I showed a technique for integrating a JMX console with Oracle WebLogic, which is a standard feature of Oracle WebLogic 11g. Customers on other web application servers, or on other versions of Oracle WebLogic, can refer to the documentation provided with the server to do something similar. In this blog entry I am going to discuss a new feature, present only in Oracle Utilities Application Framework 4 and above, that allows JMX to be used for managing and monitoring the Oracle Utilities web applications. In this case JMX can be used to perform monitoring as well as provide management of the cache.

    In Oracle Utilities Application Framework you can enable Web Application Server JMX monitoring, which is unique to the framework, by specifying a JMX port number in the RMI Port number for JMX Web setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine:

      spl.runtime.management.rmi.port=6740
      spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector
      jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
      jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
      ouaf.jmx.com.splwg.base.support.management.mbean.JVMInfo=enabled
      ouaf.jmx.com.splwg.base.web.mbeans.FlushBean=enabled

    ouaf.jmx.* files - contain the userid and password. The default setup uses the JMX default security configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see the JMX Site.

    Once the facility has been configured, and the changes reflected in the product using the initialSetup[.sh] utility, it can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for remote connections, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL from the spl.runtime.management.connector.url.default entry and the credentials you specified in the jmx.remote.x.* files. Remember these are encrypted by default, so viewing the files will not reveal the credentials.

    There are three Mbeans available to you:

    flushBean - A JMX replacement for the jsp versions of the flush utilities provided in previous releases of the Oracle Utilities Application Framework. You can manage the cache using the provided operations from JMX. The jsp versions of the flush utilities are still provided for backward compatibility, but are now authorization controlled.

    JVMInfo - A JMX replacement for the jsp version of the JVMInfo screen used by support to get a handle on JVM information. This information is environmental rather than operational and is used for support purposes. The jsp version of the JVMInfo utility is still provided for backward compatibility, but is now also authorization controlled.

    JVMSystem - An implementation of the standard Java system MXBeans for use in monitoring. We provide our own implementation of the base Mbeans to save on creating another JMX configuration for internal monitoring and to provide a consistent interface across platforms for the MXBeans. This Mbean is disabled by default and can be enabled using the enableJVMSystemBeans operation. It allows for the monitoring of the ClassLoading, Memory, OperatingSystem, Runtime and Thread MX beans.

    Refer to the Server Administration Guides provided with your product and the Technical Best Practices Whitepaper for information about the individual statistics. The Web Application Server JMX monitoring allows greater visibility for monitoring and management of the Oracle Utilities Application Framework application from jconsole or any JSR160-compliant JMX browser or console.
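    As a quick sketch of making that connection (the port and connector path are the demonstration values from spl.properties above; the environment name and install path are hypothetical, so adjust both for your installation):

      # Set the environment variables first (splenviron[.sh] ships with the product)
      . /spl/DEMO/bin/splenviron.sh -e DEMO

      # Point jconsole at the web application connector URL from spl.properties
      jconsole service:jmx:rmi:///jndi/rmi://localhost:6740/oracle/ouaf/webAppConnector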

    Read the article

  • Make Dell's OpenManage 6.2 information available through SNMP

    - by tronda
    I have successfully installed OpenManage on a CentOS 5.4 server and I'm able to use OpenManage through the web interface running on port 1311, but I would like to expose this information through the SNMP server as well. I don't know SNMP particularly well, so the configuration is a result of trial and error. I've tried to follow the description in the OpenManage Server Administrator User Guide and its documentation regarding SNMP configuration, but without success. I've created a small snmpd.conf file:

      com2sec notConfigUser default public
      group notConfigGroup v1 notConfigUser
      group notConfigGroup v2c notConfigUser
      view systemview included .1.3.6.1.2.1.1
      view systemview included .1.3.6.1.2.1.25.1.1
      access notConfigGroup "" any noauth exact all all none
      view all included .1
      rwcommunity public 10.200.26.50
      syslocation "Somewhere"
      syscontact [email protected]
      pass .1.3.6.1.4.1.4413.4.1 /usr/bin/ucd5820stat
      smuxpeer .1.3.6.1.4.1.674.10892.1

    When I try to fetch SNMP information by using snmpwalk I get the following output:

      SNMPv2-MIB::sysDescr.0 = STRING: Linux myserver.test.com 2.6.18-164.15.1.el5 #1 SMP Wed Mar 17 11:30:06 EDT 2010 x86_64
      SNMPv2-MIB::sysObjectID.0 = OID: NET-SNMP-MIB::netSnmpAgentOIDs.10
      DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (1180389) 3:16:43.89
      SNMPv2-MIB::sysContact.0 = STRING: [email protected]
      SNMPv2-MIB::sysName.0 = STRING: myserver.test.com
      SNMPv2-MIB::sysLocation.0 = STRING: "Somewhere"
      SNMPv2-MIB::sysORLastChange.0 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORID.1 = OID: SNMPv2-MIB::snmpMIB
      SNMPv2-MIB::sysORID.2 = OID: TCP-MIB::tcpMIB
      SNMPv2-MIB::sysORID.3 = OID: IP-MIB::ip
      SNMPv2-MIB::sysORID.4 = OID: UDP-MIB::udpMIB
      SNMPv2-MIB::sysORID.5 = OID: SNMP-VIEW-BASED-ACM-MIB::vacmBasicGroup
      SNMPv2-MIB::sysORID.6 = OID: SNMP-FRAMEWORK-MIB::snmpFrameworkMIBCompliance
      SNMPv2-MIB::sysORID.7 = OID: SNMP-MPD-MIB::snmpMPDCompliance
      SNMPv2-MIB::sysORID.8 = OID: SNMP-USER-BASED-SM-MIB::usmMIBCompliance
      SNMPv2-MIB::sysORDescr.1 = STRING: The MIB module for SNMPv2 entities
      SNMPv2-MIB::sysORDescr.2 = STRING: The MIB module for managing TCP implementations
      SNMPv2-MIB::sysORDescr.3 = STRING: The MIB module for managing IP and ICMP implementations
      SNMPv2-MIB::sysORDescr.4 = STRING: The MIB module for managing UDP implementations
      SNMPv2-MIB::sysORDescr.5 = STRING: View-based Access Control Model for SNMP.
      SNMPv2-MIB::sysORDescr.6 = STRING: The SNMP Management Architecture MIB.
      SNMPv2-MIB::sysORDescr.7 = STRING: The MIB for Message Processing and Dispatching.
      SNMPv2-MIB::sysORDescr.8 = STRING: The management information definitions for the SNMP User-based Security Model.
      SNMPv2-MIB::sysORUpTime.1 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.2 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.3 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.4 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.5 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.6 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.7 = Timeticks: (0) 0:00:00.00
      SNMPv2-MIB::sysORUpTime.8 = Timeticks: (0) 0:00:00.00

    I suspect that I should get some Dell-specific information when I use the snmpwalk utility. Is there something wrong in my snmpd.conf, or do I have to configure something on the OpenManage side in order to make the hardware information accessible from SNMP?
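    For reference, a quick way to check whether the OpenManage SMUX subagent has actually registered with snmpd is to walk the Dell enterprise subtree directly (the OID from the smuxpeer line above); if this returns nothing, the subagent never connected:

      # Walk the Dell OpenManage Server Administrator subtree (enterprise 674 = Dell)
      snmpwalk -v 1 -c public localhost .1.3.6.1.4.1.674.10892.1

      # Check whether snmpd logged the SMUX peer registration (path assumes default syslog setup)
      grep -i smux /var/log/messages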

    Read the article

  • Improving SAS multipath to JBOD performance on Linux

    - by user36825
    Hello all, I'm trying to optimize a storage setup on some Sun hardware with Linux. Any thoughts would be greatly appreciated. We have the following hardware:

    - Sun Blade X6270
    - 2* LSISAS1068E SAS controllers
    - 2* Sun J4400 JBODs with 1 TB disks (24 disks per JBOD)
    - Fedora Core 12
    - 2.6.33 release kernel from FC13 (also tried the latest 2.6.31 kernel from FC12, same results)

    Here's the datasheet for the SAS hardware: http://www.sun.com/storage/storage_networking/hba/sas/PCIe.pdf

    It's using PCI Express 1.0a, 8x lanes. With a bandwidth of 250 MB/sec per lane, we should be able to do 2000 MB/sec per SAS controller. Each controller can do 3 Gb/sec per port and has two 4-port PHYs. We connect both PHYs from a controller to a JBOD. So between the JBOD and the controller we have 2 PHYs * 4 SAS ports * 3 Gb/sec = 24 Gb/sec of bandwidth, which is more than the PCI Express bandwidth. With write caching enabled and when doing big writes, each disk can sustain about 80 MB/sec (near the start of the disk). With 24 disks, that means we should be able to do 1920 MB/sec per JBOD. My multipath configuration stanza looks like this:

      multipath {
        rr_min_io              100
        uid                    0
        path_grouping_policy   multibus
        failback               manual
        path_selector          "round-robin 0"
        rr_weight              priorities
        alias                  somealias
        no_path_retry          queue
        mode                   0644
        gid                    0
        wwid                   somewwid
      }

    I tried values of 50, 100 and 1000 for rr_min_io, but it doesn't seem to make much difference. Along with varying rr_min_io I tried adding some delay between starting the dd's to prevent all of them writing over the same PHY at the same time, but this didn't make any difference, so I think the I/Os are getting properly spread out. According to /proc/interrupts, the SAS controllers are using an "IR-IO-APIC-fasteoi" interrupt scheme. For some reason only core #0 in the machine is handling these interrupts. I can improve performance slightly by assigning a separate core to handle the interrupts for each SAS controller:

      echo 2 > /proc/irq/24/smp_affinity
      echo 4 > /proc/irq/26/smp_affinity

    Using dd to write to the disk generates "Function call interrupts" (no idea what these are), which are handled by core #4, so I keep other processes off this core too. I run 48 dd's (one for each disk), assigning them to cores not dealing with interrupts like so:

      taskset -c somecore dd if=/dev/zero of=/dev/mapper/mpathx oflag=direct bs=128M

    oflag=direct prevents any kind of buffer cache from getting involved. None of my cores seem maxed out. The cores dealing with interrupts are mostly idle and all the other cores are waiting on I/O as one would expect.
      Cpu0 : 0.0%us, 1.0%sy, 0.0%ni, 91.2%id, 7.5%wa, 0.0%hi, 0.2%si, 0.0%st
      Cpu1 : 0.0%us, 0.8%sy, 0.0%ni, 93.0%id, 0.2%wa, 0.0%hi, 6.0%si, 0.0%st
      Cpu2 : 0.0%us, 0.6%sy, 0.0%ni, 94.4%id, 0.1%wa, 0.0%hi, 4.8%si, 0.0%st
      Cpu3 : 0.0%us, 7.5%sy, 0.0%ni, 36.3%id, 56.1%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu4 : 0.0%us, 1.3%sy, 0.0%ni, 85.7%id, 4.9%wa, 0.0%hi, 8.1%si, 0.0%st
      Cpu5 : 0.1%us, 5.5%sy, 0.0%ni, 36.2%id, 58.3%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu6 : 0.0%us, 5.0%sy, 0.0%ni, 36.3%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu7 : 0.0%us, 5.1%sy, 0.0%ni, 36.3%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu8 : 0.1%us, 8.3%sy, 0.0%ni, 27.2%id, 64.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu9 : 0.1%us, 7.9%sy, 0.0%ni, 36.2%id, 55.8%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu10 : 0.0%us, 7.8%sy, 0.0%ni, 36.2%id, 56.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu11 : 0.0%us, 7.3%sy, 0.0%ni, 36.3%id, 56.4%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu12 : 0.0%us, 5.6%sy, 0.0%ni, 33.1%id, 61.2%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu13 : 0.1%us, 5.3%sy, 0.0%ni, 36.1%id, 58.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu14 : 0.0%us, 4.9%sy, 0.0%ni, 36.4%id, 58.7%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu15 : 0.1%us, 5.4%sy, 0.0%ni, 36.5%id, 58.1%wa, 0.0%hi, 0.0%si, 0.0%st

    Given all this, the throughput reported by running "dstat 10" is in the range of 2200-2300 MB/sec. Given the math above I would expect something in the range of 2*1920 ~= 3600+ MB/sec. Does anybody have any idea where my missing bandwidth went? Thanks!
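    One thing worth ruling out (a diagnostic sketch, not a definitive answer): if either HBA negotiated fewer than the full 8 PCIe lanes, the per-controller ceiling drops below the 2000 MB/sec computed above. lspci can show the negotiated link width:

      # Locate the LSI HBAs (the bus address below is an example)
      lspci | grep -i lsi

      # Compare the capable link width (LnkCap) with the negotiated one (LnkSta)
      lspci -vv -s 83:00.0 | grep -E 'LnkCap|LnkSta'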

    Read the article

  • Quick guide to Oracle IRM 11g: Server installation

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the first of a set of articles designed to assist with the successful installation, configuration and deployment of a document security solution using Oracle IRM. This article goes through a set of simple instructions which detail how to download, install and configure the IRM server, the starting point for building a document security solution. It contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents: Introduction; Downloading the software; Preparing a database; Creating the schema; WebLogic Server installation; Installing Oracle IRM

    Introduction

    Because we are using Oracle Enterprise Linux in this guide, and before we get into the detail of IRM, I'd like to share some Linux tips to make life a bit easier.

    - Use a 64-bit platform. IRM 11g runs just fine on a 32-bit server, but with 64-bit you will build a more future-proof service.
    - Download and install the latest Java JDK package. Make sure you get the 64-bit version if you are on a 64-bit server.
    - Configure Linux to use a good Yum server to simplify installing packages. For Oracle Enterprise Linux we maintain a great public Yum repository.
    - Have at least 20GB of free disk space on the partition where you intend to install the IRM server. The downloads are big; you then extract them and then install, which quickly consumes disk space. You can easily recover that space by deleting the downloaded and extracted files afterwards, but it's nice to have the disk space spare to keep these around in case you need to restart any part of the installation process.

    Downloading the software

    OK, so before you can do anything, you need the software install kits. Luckily Oracle allows you to freely download every technology we create. You'll need to get the following:

    - Oracle WebLogic Server
    - Oracle Database
    - Oracle Repository Creation Utility (RCU)
    - Oracle IRM server

    You can use Microsoft SQL Server 2005 or 2008; in this guide I've used Oracle RDBMS 11gR2 for Linux.

    Preparing the database

    I'm not going to go through the finer points of installing the database; there are many very good guides on installing the Oracle Database. However, one thing I would suggest you think about is enabling TDE, network encryption and Database Vault. These Oracle database security technologies are excellent for creating a complete end-to-end security solution. There is no point going to all the effort of securing document access with IRM when someone can go directly to the database and assign themselves rights to documents. To understand this further, you can see a video of the IRM service using these database security technologies.

    With a database up and running we need to create a schema to hold the IRM data. This schema contains the rights model, cryptographic keys, user account IDs, associated rights and so on.

    Creating the IRM database schema

    Oracle uses the Repository Creation Utility to build the schema. Extract the files from the RCU zip, then in a terminal window:

      cd /oracle/install/rcu/bin
      ./rcu

    This will launch the Repository Creation Utility. Hit next and continue onto the next dialog. You are asked if you are going to create a new schema or wish to drop an existing one; you just need to click next at this point to create a new schema.

    The RCU next needs to know where your database is, so you'll need the following details of your database instance. Below, for reference, is the information for my installation.

    - Hostname: irm.oracle.demo
    - Port: 1521 (this is the default TCP port for the Oracle Database)
    - Service Name: irm.oracle.demo (note this is not the SID, but the service name)
    - Username: sys
    - Password: ********
    - Role: SYSDBA

    Then select next. Because the RCU contains schemas for many of the Oracle technologies, you now need to select just the Oracle IRM schema. Open the section under "Enterprise Content Management" and tick the "Oracle Information Rights Management" component. Note that you also get the chance to select a prefix, which defaults to "DEV" (for development). I usually change this to something that reflects my own install: PROD for a production system, INT for internal only, etc.

    The next step asks for the passwords for the schema users. We are only creating one schema here, so you just enter one password. Some brave souls store this password in an Excel spreadsheet which is then secured against the IRM server you're about to install in this guide.

    Nearing the end of the schema creation is the mapping of tablespaces to the schema. Note that I had already set up a tablespace encrypted using TDE, and at this point I was able to select it by clicking in the "Default Tablespace" column. The next dialog confirms your actions, and clicking next causes it to create the schema and default data. After this you are presented with the completion summary.

    WebLogic Server installation

    The database is now ready and the next step is to install the application server. Oracle IRM 11g is a JEE application currently supported only on Oracle WebLogic Server, so the next step is to get WebLogic Server installed, which is pretty easy. Depending on the version you download, you either run the binary or, for a 64-bit platform (like mine), run the following command:

      java -d64 -jar wls1033_generic.jar

    In the resulting dialog hit next to start walking through the install. Next choose a directory into which you will install WebLogic Server. I like to change from the default and install into /oracle/; then all my software goes into this one folder, all owned by the "oracle" user. The next dialog asks for your Oracle support information to ensure you are kept up to date. If you have an Oracle support account, enter your details, but for most evaluation systems I leave these fields blank. Again, for evaluation or development systems, I usually stick with the "Typical" install type which you are asked for next. You are then asked for the JDK which will be used for the server. When installing from the generic jar on a 64-bit platform like in this guide, no JDK is bundled with the installer, but it does a good job of detecting the one you've got installed. Defaults for the install directories are usually taken; no changes here, just click next. And finally we are ready to install: hit next, sit back and relax. Typically this takes about 10 minutes.

    After the install, do not run the quick start; we need to deploy the IRM install itself, from which we will create a new WebLogic domain. For now just hit done and let's move to the final step of the installation process.

    Installing Oracle IRM

    The last piece of the puzzle to getting your environment ready is to deploy the IRM files themselves. Unzip the Oracle Enterprise Content Management 11g zip file and it will create a Disk1 directory. Switch to this folder and in the console run:

      ./runInstaller

    This will launch the installer, which will also ask for the location of the JDK. You should now see the first stage of the IRM installation. The dialog warns that you need to have a WebLogic server installed and have created the schemas, but you've just done all that above (I hope), so we are ready to go. The installer now checks that you have all the required libraries installed and that other system parameters are correct. Because nearly all of my development and evaluation installations have the database server on the same system, the installer passes these checks without issue. Next...

    Now choose where to install the IRM files. You must install into the same Middleware Home as the WebLogic Server installation you just performed; usually the installer defaults to this location anyway. I also tend to change the Oracle Home Directory to Oracle_IRM so it's clear this is just an IRM install. The summary page tells you about the space needed to deploy the files. Unfortunately the IRM install comes with all of the other Oracle ECM software; you can't just select the IRM files, so everything gets deployed to disk and uses 1.6GB of space! Not fun, but Oracle has to package up similar technologies otherwise we would have a very large number of installers to QA and manage. Again, not fun. Hit Install; time for another drink, maybe a piece of cake or a donut... on a half-decent system this part of the install took under 10 minutes.

    Finally the installation of your IRM server is complete. Click finish; the next phase is to create the WebLogic domain and start configuring your server. Now move onto the next article in this guide, configuring your IRM server ready to seal your first document.
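    For convenience, here is a recap of the commands used in this article as one sequence (paths and kit file names are from my demonstration environment and will differ on yours):

      # Create the IRM schema with the Repository Creation Utility
      cd /oracle/install/rcu/bin
      ./rcu

      # Install WebLogic Server from the 64-bit generic installer
      java -d64 -jar wls1033_generic.jar

      # Install the Oracle IRM (Enterprise Content Management) files
      cd /oracle/install/ecm/Disk1    # directory created by unzipping the ECM kit
      ./runInstaller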

    Read the article

  • Common Areas For Securing Web Services

    The only way to truly keep a web service secure is to host it on a web server and then turn off the server. In real life no web service is 100% secure, but there are methodologies for increasing the security around web services. In order to consume a web service, clients must adhere to the service's Service-Level Agreement (SLA). An SLA is a digital contract between a web service and its consumers. This contract defines what methods and protocols must be used to access the web service, along with the defined data formats for sending and receiving data through the service. If either party does not abide by the contract, then the service will not be accessible for consumption.

    Common areas for securing web services:

    - Universal Description, Discovery and Integration (UDDI)
    - Web Service Description Language (WSDL)
    - Application level
    - Network level

    "UDDI is a specification for maintaining standardized directories of information about web services, recording their capabilities, location and requirements in a universally recognized format." (UDDI, 2010) WSDL, on the other hand, is a standardized format for defining a web service: a WSDL describes the allowable methods for accessing the web service along with what operations it performs.

    At the application level, web services can control access to data by implementing their own security through various methodologies, but the most common method is to have a consumer pass in a token along with a system identifier so that the system can validate the consumer's access to any data or actions they may be requesting. Security restrictions can also be applied to the host web server of the service by restricting access to the site by IP address or login credentials. Furthermore, companies can block access to a service at the network level by using firewall rules, allowing access to specific services on certain ports only from specific IP addresses. This last methodology may require consumers to obtain a static IP address and register it with the web service host so that they will be provided access to the information they wish to obtain.

    It is important to note that these areas can be secured in any combination, based on the security tolerance dictated by the publisher of the web service. That said, the bare minimum security implementation must be at the application level, within the web service itself. Typically I create a security layer within a web service's exposed interface that requires a consumer identifier and a consumer token. This information is then used to authenticate the requesting consumer before the actual request is performed.

    References:
    UDDI. (2010). Retrieved 11 13, 2011, from LooselyCoupled.com: http://www.looselycoupled.com/glossary/UDDI
    Service-Level Agreement (SLA). (n.d.). Retrieved 11 13, 2011, from SearchITChannel: http://searchitchannel.techtarget.com/definition/service-level-agreement
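    As an illustration of the token approach described above, a consumer call might look like the following (a hypothetical sketch: the header names, identifier, token and URL are all invented for the example; real services define their own scheme in the SLA):

      # The service validates the consumer id/token pair before executing the request
      curl -H "X-Consumer-Id: ACME001" \
           -H "X-Consumer-Token: 7f3a9c1e" \
           "https://services.example.com/v1/catalog"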

    Read the article

  • Create associations between pieces of information

    - by Andrea Girardi
    I deployed a project some days ago that extracts medical articles using the results of a questionnaire completed by a user. For instance, if I answer on the questionnaire that I have type 2 diabetes and I'm a smoker, my algorithm extracts all articles related to diabetes, bubbling up the articles that contain information about type 2 diabetes and smoking. Basically we created a list of topics and, for every topic, we defined a kind of "guideline" that allows us to extract and order information for a user. I'm quite sure there are better ways to relate two pieces of content, but I was not able to find them online. Could you suggest a model, algorithm or paper to better understand this kind of problem, one that would help me find a faster and more accurate way to extract information for a user?

    Read the article

  • Is a yobibit really a meaningful unit? [closed]

    - by Joe
    Wikipedia helpfully explains:

      The yobibit is a multiple of the bit, a unit of digital information storage, prefixed by the standards-based multiplier yobi (symbol Yi), a binary prefix meaning 2^80. The unit symbol of the yobibit is Yibit or Yib. 1 yobibit = 2^80 bits = 1208925819614629174706176 bits = 1024 zebibits. The zebi and yobi prefixes were originally not part of the system of binary prefixes, but were added by the International Electrotechnical Commission in August 2005.

    Now, what in the world actually takes up 1,208,925,819,614,629,174,706,176 bits? The information content of the known universe? I guess this is forward thinking -- maybe astrophysics or nanotech, or even DNA analysis, really will require these orders of magnitude. How far off do you think all this is? Are these really meaningful units?
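    For what it's worth, the quoted figure checks out (a quick sanity check with bc):

      # 2^80 expands to the 25-digit number quoted above
      echo '2^80' | bc
      # 1208925819614629174706176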

    Read the article

  • Transfer some information from 1 workbook to another plus 1

    - by Cheryl
    I have one workbook with 4 sheets. Some of the information is auto-entered from one sheet to another. I do not save when closing out; the data is entered, printed and deleted. I need to get some of the information off those sheets into a separate workbook that I do save. Example: worksheet 1 contains info such as name, dob, reason, etc. I am wondering if I can transfer that information to another sheet so it is entered on row 1, then the next entry goes on row 2, and so on. Since I do not save the first workbook, will this work?

    Read the article

  • Information about a file or directory

    - by Tim
    In Linux, the information about a file or directory is stored in its inode. I was wondering what the equivalent data structure is for a file or directory in Windows 7? How do you get the information about a file or directory in Linux and in Windows 7, from a terminal or command prompt? Is the owner of a file or directory always its creator? Can the owner be changed? Is there a creation timestamp for a file in Linux and in Windows 7? How do you get it? Thanks and regards!
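    For the Linux side at least, this is easy to check from a terminal (stat is part of GNU coreutils; note that classic Linux filesystems such as ext3 record access, modify and inode-change times, but no creation time):

      # Dump inode metadata: inode number, owner, permissions, timestamps
      stat myfile.txt

      # Just the inode number
      ls -i myfile.txt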

    Read the article

  • How to display information contained in XML file from another website

    - by Tristan
    Hello, I have an XML file (an XML file I produce) which contains information about my partners. I want each of them to be able to display, on their own website, the information relative to them by picking it out of the XML file. I have no idea how to do that, except that I need to write a 'parser' in JavaScript to display the information. Could you please provide me with examples of how to do that? (How do I write the parser, and how do I display only the information for one partner?) Thank you, Regards
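    Whichever language ends up doing the display, the selection itself is just an XPath query over the feed. A command-line sketch with xmllint from libxml2 (assuming a hypothetical partners.xml where each partner sits in a <partner id="..."> element):

      # Extract only the node belonging to partner "acme" from the feed
      xmllint --xpath '//partner[@id="acme"]' partners.xml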

    Read the article

  • cannot modify header information [closed]

    - by mohanraj
    Possible Duplicate: cannot modify header information

      Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\register&login\logout.php:1) in C:\xampp\htdocs\register&login\logout.php on line 4
      Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\register&login\logout.php:1) in C:\xampp\htdocs\register&login\logout.php on line 5
      Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\register&login\logout.php:1) in C:\xampp\htdocs\register&login\logout.php on line 6

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for managing and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product.

    In the first release of Oracle Utilities Customer Care And Billing (V1.x, I am talking about), we used Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was its performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked as a performance consultant, I used this report to identify badly performing services and also to gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server, it was not at the same granularity as txrpt, or as useful.

    I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework, but it is now accessible via JMX and we have added more detail into the performance tracking. Most of this new design came from working with customers around the world to make sure we introduced a feature that not only satisfied their performance tracking needs but allowed for finer-grained performance analysis.

    As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in the RMI Port number for JMX Business setting and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this information is supplied, a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility:

    spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine:

      spl.runtime.management.rmi.port=6750
      spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector
      jmx.remote.x.password.file=scripts/ouaf.jmx.password.file
      jmx.remote.x.access.file=scripts/ouaf.jmx.access.file
      ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled

    ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security.

    Once it has been configured, and the changes reflected in the product using the initialSetup[.sh] utility, the JMX facility can be used. For illustrative purposes I will use jconsole, but any JSR160-compliant browser or client can be used (with the appropriate configuration). Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or, for remote connections, ensure java is in your path and jconsole.jar is in your classpath), you specify the URL in the spl.runtime.management.connector.url.default entry.

    You are then able to track the performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following:

    - The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc.), so you can compare the performance of updates against read transactions.
    - The Minimum and Maximum times are also collected to give you an idea of the spread of performance.
    - The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data.
    - The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site.

    There are a number of interesting operations that can also be performed:

    - reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is OK for most people. But what if you wanted to be more granular? This operation allows you to set the collection period to anything you wish; the statistics collected represent values since the last restart or last reset.
    - completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can then be loaded into a database, a tool, or simply into your favourite spreadsheet for analysis.

    Here is an extract of an execution dump from my demonstration environment to give you an idea of the format:

      ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User
      ...
      CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN
      CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA
      CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS
      CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON
      CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN
      CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN
      InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN
      InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN
      MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN
      NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN
      ...

    There are other operations and attributes available; refer to the Server Administration Guide provided with your product to understand the full set of operations and attributes. This is one of the many features I am proud we implemented, as it allows flexible monitoring of the performance of the product.
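    Once an execution dump has been saved to a file, ordinary text tools are enough for a first pass at analysis. A sketch (assuming the extract above was saved as perf.csv with the header row removed):

      # Ten slowest services by average response time (field 5 = Avg Time)
      sort -t, -k5 -rn perf.csv | head

      # Total call volume per service type (field 2), highest first
      awk -F', ' '{calls[$2] += $6} END {for (t in calls) print calls[t], t}' perf.csv | sort -rn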

    Read the article

  • How to add security zone information to files?

    - by user33938
    I recently enabled "Do not preserve zone information in file attachments" to get rid of that annoying "Do you want to run this program?" security warning. Now, how can I add this information back to a file that doesn't have it? I would like to get that warning back on certain files.
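    For background, the zone information lives in an NTFS alternate data stream named Zone.Identifier attached to the file, so one way to re-add it is to write that stream yourself from a command prompt (a sketch, assuming an NTFS volume; ZoneId 3 is the Internet zone):

      rem Redirection comes first so the trailing "3" isn't parsed as a file handle
      > file.exe:Zone.Identifier echo [ZoneTransfer]
      >> file.exe:Zone.Identifier echo ZoneId=3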

    Read the article

  • how to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which comes with Python 2.4 installed. It was built from source per its instructions:

      ./configure
      make
      make altinstall

    However, after about 20 hours of trying I managed to find a workable solution and was able to install lxml. Until I noticed the following error at the top of the interpreter:

      python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
      Python 2.7.2 (default, Jun 30 2011, 18:55:26)
      [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

      $ ./test.py
      /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but this error is bothersome. After some digging I have come to believe I have the wrong version of libz installed: either an older version or one built for a different platform. I'm not quite sure how; I've only installed libz through yum, as far as I know. Although I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like:

      $ ls -ltrh libz*
      -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
      -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
      -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items dated Jul 1 or Jun 30 are from me. I had initially moved these files into a backup folder, as they seemed to be (1) duplicates and (2) dated after/during the problems with lxml I alluded to earlier.

    One inclination is to completely remove Python 2.7 and reinstall. I think having it install to /usr/local/ was a poor default choice. However, without a make uninstall option present, that seems a time-consuming task for a solution I am not quite sure would solve my problem.
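    One way to pin down what the linker is complaining about (a diagnostic sketch; objdump ships with binutils):

      # A stock zlib exports ZLIB_1.2.x version symbols; a libz with no
      # version definitions would explain the "no version information" warning
      objdump -p /usr/lib64/libz.so.1 | grep -A 10 'Version definitions'

      # Confirm which libz the python2.7 binary actually resolves at run time
      ldd /usr/local/bin/python2.7 | grep libz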

    Read the article

  • Remove identifying information from SSH.

    - by The Rook
    When I do an nmap -sV 127.0.0.1 -p 22 of my system I get the following information:

      SF-Port22-TCP:V=4.62%I=7%D=11/9%Time=4916402C%P=i686-pc-linux-gnu%r(NULL,2
      SF:7,"SSH-2.0-OpenSSH_5.1p1\x20Debian-3ubuntu1\r\n");

    How do I go about changing these two pieces of information: i686-pc-linux-gnu and SSH-2.0-OpenSSH_5.1p1\x20Debian-3ubuntu1?
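    A partial measure worth knowing about (a sketch, specific to Debian/Ubuntu builds of OpenSSH): the "Debian-3ubuntu1" suffix comes from the distribution patch and can be suppressed with a config option, while the base "SSH-2.0-OpenSSH_5.1p1" string is compiled in (and used by clients for compatibility decisions), so removing it means rebuilding from source:

      # /etc/ssh/sshd_config on Debian/Ubuntu builds, then restart sshd
      DebianBanner no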

    Read the article

  • How to test if DNS information has propagated?

    - by Andrew
    I set up a new DNS entry for one of my subdomains (I haven't set up any Apache virtual hosts or anything like that yet). How can I check that the DNS information has propagated? I assumed that I could simply ping my.subdomain.com and, if it resolves, expect it to show the IP address I specified in the A record. However, I don't know if that assumption is correct. What is the best way to check this?
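    One common approach (a sketch using dig from the dnsutils/bind-utils package): ask the zone's authoritative server directly, then compare what a public resolver returns:

      # What the authoritative name server says (substitute your zone's NS host)
      dig +short my.subdomain.com A @ns1.example.com

      # What a public resolver currently returns
      dig +short my.subdomain.com A @8.8.8.8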

    Read the article

  • grabbing/parsing iSCSI iface information

    - by chrisg
    I'm writing a puppet provider for iSCSI and want to grab information about the ifaces (in my case HBAs) we have. Is there a better way than doing this?

      iscsiadm -m iface -I be2iscsi.00:00:00:00:00:00 | grep iface.ipaddress | sed -e 's/iface.ipaddress = //'

    It looks pretty ugly, but the -n switch doesn't seem to work unless you're in --op=update. Is there a better way to grab this information, in particular in Ruby?
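    Not a Ruby answer, but the grep|sed pair can at least collapse into a single awk call (same iface name as above; awk splits each line on " = "):

      # Print just the value of iface.ipaddress
      iscsiadm -m iface -I be2iscsi.00:00:00:00:00:00 | awk -F' = ' '$1 == "iface.ipaddress" {print $2}'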

    Read the article

  • Oracle IRM video demonstration of separating duties of document security

    - by Simon Thorpe
    One thing an Information Rights Management technology should do well is separate out three main areas of responsibility:

    - The business process of defining and controlling the classifications to which content is secured, and the definition of the roles employees, customers, partners and contractors have when accessing secured content.
    - Allowing IT to manage the server and perform the role of authorizing the creation of new classifications to meet business needs, yet once the classification has been created and handed off to the business, IT no longer plays a role in its ongoing management.
    - Empowering the business to take ownership of the classifications to which their own content is secured. For example, an employee who is leading an acquisition project should be responsible for defining who has access to confidential project documents. This person should be able to manage the rights users have in the classification and also be the point of contact for those wishing to gain rights.

    Oracle IRM has had this core model at the heart of its design since its creation in the late 1990s. Due in part to the important separation of rights from the documents themselves, Oracle IRM places the right functionality within the right parts of the business. For example, some IRM technologies allow the end user to make decisions about what users can print, edit or save a secured document. In practice this results in a wide variety of content secured with a plethora of options that don't conform to any policy. With Oracle IRM, users choose from a list of classifications to which they have been given the ability to secure information. Their role in the classification was given to them by the business owner of the classification, yet the definition of the role resides within the realm of corporate security, who own the overall business classification policies. It is this type of design and philosophy in Oracle IRM that makes it an enterprise solution that works beyond a few users and a few secured documents, scaling to hundreds of thousands of users and millions of documents.

    The following video shows how Oracle IRM 11g, the market-leading document security solution, lets the security organization manage and create classifications whilst the business owns and manages them. If you want to experience using Oracle IRM secured content and the effects of the different roles users have, why not sign up for our free demonstration.

    Read the article

  • The Complementary Roles of PLM and PIM

    - by Ulf Köster
    Oracle Product Value Chain Solutions (aka Enterprise PLM Solutions) are a comprehensive set of product management solutions that work together to provide Oracle customers with a broad array of capabilities to manage all aspects of product life: innovation, design, launch, and supply chain / commercialization processes, beyond the capabilities and boundaries of traditional engineering-focused Product Lifecycle Management applications. They support companies with an integrated, managed view across the product value chain: from lab to launch, from farm to fork, from concept to product to customer, from product innovation to product design and product commercialization.

    Product Lifecycle Management (PLM) represents a broad suite of software solutions to improve product-oriented business processes and data. PLM success stories prove that PLM helps companies improve time to market, increase product-related revenue, reduce product costs, reduce internal costs and improve product quality. As a maturing suite of enterprise solutions, PLM is still evolving to realize the promise it can provide across all facets of a business and all phases of the product lifecycle. The vision for PLM includes everything from gathering early requirements for a product through multiple stages of the product lifecycle, from product design through commercialization and eventual product retirement or replacement. In discrete or process industries, PLM is typically more focused on Product Definition: the technical view of a material or part, including specifications, bills of material and manufacturing data. With Agile PLM, this is specifically related to capabilities addressing Product Collaboration, Governance and Compliance, Product Quality Management, Product Cost Management and Engineering Collaboration. PLM today mainly addresses key requirements in the early product lifecycle, in engineering changes and in the "innovation cycle", and primarily adds value in product design, development, launch and the engineering change process. In short, PLM is the master for Product Definition, wherever manufacturing takes place.

    Product Information Management (PIM) is a product suite that has evolved in parallel to PLM. PIM can extend the value of PLM implementations by providing complementary tools and capabilities. More relevant in the area of Product Commercialization, the vision for PIM is to manage product information throughout an enterprise and supply chain to improve product-related knowledge management, information sharing and synchronization from multiple data sources. PIM success stories have shown the ability to provide multiple benefits, with particular emphasis on reducing information complexity and information management costs. Product information in PIM is typically treated as the commercial view of a material or part, including sales and marketing information and categorization. PIM collects information from multiple manufacturing sites and multiple suppliers into its repository, but also provides integration tools to push the information back out to other systems, serving as an active central repository with the aim of providing a holistic view of any product sold by a company (hence the name "Product Hub"). In short, PIM is the master of commercial Product Information.
    So PIM is quickly becoming mandatory because of its value in optimizing multichannel selling processes and relationships with customers, as you can see from the following comparison:

    Product Lifecycle
      - PLM current state: primarily R&D; the front-end innovation cycle; the change process
      - PIM: primarily the commercial / transactional state of the lifecycle
      - Key benefit PIM adds to PLM: provides a seamless information flow from design and manufacturing through the ultimate selling and servicing of products

    Data
      - PLM current state: primarily focused on "item" vs. "product" data; product structures; specifications; technical information
      - PIM: repository for all product information; reaches out to the entire enterprise and its various silos of product information and descriptions
      - Key benefit PIM adds to PLM: provides a "trusted source" of accurate product information to the internal organization and trading partners

    Data Lifecycle
      - PLM current state: repository for all design iterations; historical information
      - PIM: released, current information, with version management and time stamping
      - Key benefit PIM adds to PLM: provides a single location to track and audit historical product information

    Communication
      - PLM current state: PLM releases the finished product to ERP; PLM is the master for Product Definition
      - PIM: captures information from disparate sources, including in-house data stores; recognizes the reality of today's data "mess" across information silos
      - Key benefit PIM adds to PLM: provides the ability to package product information for its audience in the desired, relevant format to meet their exacting business requirements

    Departmental
      - PLM current state: R&D; manufacturing; quality; compliance; procurement; strategic marketing
      - PIM: focus on marketing and sales; gathering information from other departments, multiple sites, multiple suppliers
      - Key benefit PIM adds to PLM: a singular enterprise solution that leverages existing information silos and data stores

    Supply Chain
      - PLM current state: multi-site internal collaboration; supplier collaboration; customer collaboration
      - PIM: works with customers, exchanges / data pools, and trading partners to provide relevant product information packaged the way the customer desires
      - Key benefit PIM adds to PLM: the ability to provide trading partners and internal customers with information in the manner they desire, continuously

    Tools
      - PLM current state: data management; collaboration; innovation management
      - PIM: cleansing; synchronization; hub functions
      - Key benefit PIM adds to PLM: consistent, clean and complete commercial product information

    The goals of both PLM and PIM, put simply, are to help companies make more profit from their products. PLM and PIM solutions can easily be added together as they share some of the same goals, while coming from two different perspectives: the definition of the product and the commercialization of the product. Both can serve as a form of product "system of record", but they take different approaches to delivering value. Oracle Product Value Chain solutions offer rich new strategies for executives to collectively leverage Agile PLM and Product Data Hub, together with Enterprise Data Quality for Products and other industry-leading Oracle applications such as Oracle Innovation Management, to achieve further incremental value. This is unique on the market today.

    Read the article
