Search Results

Search found 108378 results on 4336 pages for 'oracle user experience'.


  • Best Party of 2011: Introducing Java 7

    - by Tori Wieldt
    As a member of the Java community, you played a critical role in building Java 7. You contributed great ideas for new features and new ways of working and collaborating to take the next step in development. And now, it’s time to celebrate with a global gathering of the Java community—online and live. See your ideas at work. Hear about everything Java 7 can do for you and how we’re moving Java forward together. Join us for celebrations in Redwood Shores, São Paulo, or London—as we unveil the latest innovations in Java 7. The three events will be joined with each other by satellite, and will be available as a webcast if you can't attend the live events. Learn from fellow developers around the globe who are getting the most out of the new features. Get overviews from the Java experts on Project Coin, the Fork/Join framework, the new file system API, improvements to the VM, and a panel discussion with Q&A.
    Thursday, July 7, 2011:
    - Redwood Shores, United States: 9:00 a.m. PT - 1:30 p.m. PT
    - São Paulo, Brazil: 1:00 p.m. BRT
    - London, England: 5:00 p.m. BST
    - Live webcast: 9:00 a.m. PT - 1:30 p.m. PT
    Get more information about the July 7 events. You need to register for the live events or webcast. There will also be other celebrations at Java User Group (JUG) meetings over the next few months. Find your local JUG. Follow the conversation on Twitter: follow @Java and use #java7. Java is moving forward, let's party!

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite its inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 conducted a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard. When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional life. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below). With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone that took the survey once again for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting change in the specification.
    APIs to Add to Java EE 7 Full/Web Profile
    The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8, to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, which also includes the already popular JAX-RS API.
    Enabling CDI by Default
    The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regards to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice.
    CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8.
    Consistent Usage of @Inject
    The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception with its existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8.
    Expanding the Use of @Stereotype
    The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment. Unfortunately, there was not enough time and resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8.
    Expanding Interceptor Use
    The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on (see the sketch below).
    I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of the Java EE 8 JSRs if that's something you want to look into...
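    To make the last two points concrete, here is a minimal, hedged sketch of what @Inject plus the expanded interceptor support can look like on a plain Servlet in Java EE 7. The @Logged binding, LoggingInterceptor and AuditService names are hypothetical illustrations, not from the post:

      import java.io.IOException;
      import java.lang.annotation.*;
      import javax.enterprise.context.ApplicationScoped;
      import javax.inject.Inject;
      import javax.interceptor.*;
      import javax.servlet.ServletException;
      import javax.servlet.annotation.WebServlet;
      import javax.servlet.http.*;

      // A custom CDI interceptor binding (hypothetical name).
      @InterceptorBinding
      @Retention(RetentionPolicy.RUNTIME)
      @Target({ElementType.TYPE, ElementType.METHOD})
      @interface Logged {}

      @Logged
      @Interceptor // must still be enabled, e.g. via beans.xml, to fire
      class LoggingInterceptor {
          @AroundInvoke
          public Object log(InvocationContext ctx) throws Exception {
              System.out.println("Entering " + ctx.getMethod().getName());
              return ctx.proceed(); // continue with the intercepted call
          }
      }

      // Hypothetical CDI bean injected below.
      @ApplicationScoped
      class AuditService {
          String recentEntries() { return "..."; }
      }

      // Since Java EE 7, Servlets are injection and interception targets too.
      @WebServlet("/audit")
      public class AuditServlet extends HttpServlet {
          @Inject
          AuditService audit; // plain @Inject instead of a JSR-specific annotation

          @Logged
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              resp.getWriter().println(audit.recentEntries());
          }
      }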

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. Linux reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes, e.g.:
    Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers
    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"):
    Free Memory and Used Memory
    Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is: an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However, this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand.
    That said, focusing more specifically on the percentage question: apart from this memory that the OS takes, how much free memory must be available on every node so that they operate normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
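    Along those lines, here is a minimal sketch (not from the post) of how that "real" utilization can be computed by treating buffers and page cache as reclaimable. It assumes the classic /proc/meminfo layout:

      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.Map;

      public class MemCheck {
          public static void main(String[] args) throws Exception {
              Map<String, Long> kb = new HashMap<>();
              for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                  String[] f = line.split("\\s+");
                  kb.put(f[0].replace(":", ""), Long.parseLong(f[1])); // values are in kB
              }
              long total = kb.get("MemTotal");
              // Buffers and cache are reclaimed on demand, so don't count them as "used".
              long used = total - kb.get("MemFree") - kb.get("Buffers") - kb.get("Cached");
              double pct = 100.0 * used / total;
              System.out.printf("Real memory utilization: %.1f%%%n", pct);
              if (pct > 80.0) {
                  System.out.println("Above the 80% rule-of-thumb threshold - investigate.");
              }
          }
      }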

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I’ll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.
    Configuring the Data Nodes
    The configuration of data node threads can be managed in two ways via the config.ini file:
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable that enables users to configure both the number of each thread type to use and also which CPUs to bind them to (see the config.ini sketch at the end of this post).
    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64 - 80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.
    24 Threads and Beyond!
    An example of how to make best use of a 24 CPU/thread server box is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication
    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, and so it is recommended to separate activity across CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task. Only by using CPU affinity syscalls can a process be made to run on those CPUs. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node.
    On a 32 CPU/thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient
    On a 40 CPU/thread server, increase ldm threads to 16, tc threads to 8 and increment send and receive threads to 4.
    On a 48 CPU/thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoiding the IO threads and main thread on the same CPU
    - Adding 1 more receive thread
    Summary
    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don’t forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
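    As promised above, here is a hedged config.ini sketch of the 24 CPU/thread layout. The CPU numbering is arbitrary, and the exact ThreadConfig grammar should be checked against the MySQL Cluster documentation for your version; treat this as an illustration, not a reference:

      # Hypothetical [ndbd default] section for the 24 CPU/thread example:
      # 8 ldm + 4 tc + 3 recv + 3 send + 1 rep, main and io sharing CPU 19.
      [ndbd default]
      ThreadConfig="ldm={count=8,cpubind=0,1,2,3,4,5,6,7},tc={count=4,cpubind=8,9,10,11},recv={count=3,cpubind=12,13,14},send={count=3,cpubind=15,16,17},rep={cpubind=18},main={cpubind=19},io={cpubind=19}"

      # /etc/sysconfig/irqbalance: ban IRQ handling on CPUs 0-19 (low 20 bits set).
      IRQBALANCE_BANNED_CPUS=0x0FFFFF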

    Read the article

  • Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

    - by ShawnBailey
    When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from, and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and hopefully easily. There are several ways that this collection could be scripted and this is just one example.
    What is included:
    - wlst.properties: Ant properties
    - build.xml: Ant build file
    - soa_diagnostic_script.py: Python script
    What is collected:
    - 5 contextual thread dumps at 5-second intervals
    - Diagnostic log entries from the server
    - WLS Image, which includes the domain configuration and WLS runtime data
    - Most of the SOA Diagnostic Dumps, including those for BPEL runtime, Adapters and composite information from MDS
    Instructions:
    1. Download the package and extract it to a location of your choosing
    2. Update the properties file 'wlst.properties' to match your environment
    3. Run 'ant' (must be on the path)
    4. Collect the zip package containing the files (by default it will be in the script.output location)
    Properties reference:
    - oracle_common.common.bin: Location of oracle_common/common/bin
    - script.home: Location where you extracted the script and supporting files
    - script.output: Location where you want the collections written
    - username: User name for server connection
    - pwd: Password to connect to the server
    - url: T3 URL for server connection, '<host>:<port>'
    - dump_interval: Interval in seconds between thread dumps
    - log_interval: Duration in minutes that you want to go back for diagnostic log information
    Script Package
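    For reference, a filled-in wlst.properties might look like the following sketch. All of the values here are hypothetical placeholders (the paths, credentials and host:port are not from the post) and must be adjusted to your environment:

      # Hypothetical example values - adjust everything below.
      oracle_common.common.bin=/u01/oracle/middleware/oracle_common/common/bin
      script.home=/home/oracle/soa-diag
      script.output=/tmp/soa-diag-output
      username=weblogic
      pwd=welcome1
      url=soahost1.example.com:8001
      dump_interval=5
      log_interval=60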

    Read the article

  • XML Rules Engine and Validation Tutorial with NIEM

    - by drrwebber
    Our new XML Validation Framework tutorial video is now available. See how to easily integrate code-free adaptive XML validation services into your web services using the Java CAMV validation engine. CAMV allows you to build fault-tolerant content checking with XPath that can optionally use SQL data lookups. This can provide warnings as well as error conditions to tailor your validation layer to exactly meet your business application needs. The video also covers developing test suites using Apache Ant to script validations, which allows a community to share sets of conformance-checking tests and tools. On the technical XML side, the video introduces XPath validation rules and illustrates the concepts of XML content and structure validation. CAM validation templates allow contextual, parameter-driven, dynamic validation services to be implemented, compared to using a static and brittle XSD schema approach. The SQL table lookup and code list validation are discussed and examples presented. Features are highlighted along with a demonstration of the interactive generation of actual live XML data from a SQL data store, followed by validation processing complete with error and warning detection. The presentation provides a primer for developing web service XML validation and integrating it into a SOA approach, along with examples and resources. Alignment with the NIEM IEPD process for interoperable information exchanges is also discussed, along with NIEM rules services. The CAMV engine is a high-performance, scalable Java component for rapidly implementing code-free validation services and methods. CAMV is a next-generation WYSIWYG approach that builds on older Schematron coding-based interpretative runtime tools and provides a simpler declarative metaphor for rules definition. See: http://www.youtube.com/user/TheCAMeditor

    Read the article

  • JavaOne 2012 in Review

    - by Janice J. Heiss
    Noted freelance writer Steve Meloan has a new article up on otn/java, titled “JavaOne 2012 Review: Make the Future Java,” in which he summarizes the happenings at JavaOne 2012. Along the way, he reminds us that if the future turns out to be anything like the past, Java will do fine: “The repeated theme for this year's conference was ‘Make the Future Java,’ and according to recent stats, the groundwork is already firmly in place:
    - There are 9 million Java developers worldwide.
    - Three billion devices run Java.
    - Five billion Java Cards are in use.
    - One hundred percent of Blu-ray Disc players ship with Java.
    - Ninety-seven percent of enterprise desktops run Java.
    - Eighty-nine percent of PC desktops run Java.
    This year's content curriculum program was organized under seven technical tracks:
    - Core Java Platform
    - Development Tools and Techniques
    - Emerging Languages on the JVM
    - Enterprise Service Architectures and the Cloud
    - Java EE Web Profile and Platform Technologies
    - Java ME, Java Card, Embedded, and Devices
    - JavaFX and Rich User Experiences”
    Meloan artfully reminds us of how JavaOne makes learning fun. Have a look at the article here.

    Read the article

  • Web Applications Desktop Integrator (WebADI) Feature for Install Base Mass Update in 12.1.3

    - by LuciaC
    Purpose
    The integration of WebADI technology with the Install Base Mass Update function is designed to make creation and update of bulk item instances much easier than in the past.
    What is it?
    WebADI is an Excel-based desktop application where users can download an Excel template with item instances pre-populated based on search criteria. Users can create and update item instances in the Excel sheet and finally upload the Excel data using an "upload" option available in the Excel menu. On upload, the modified data is bulk-uploaded to interface tables, which are processed by an asynchronous concurrent program that users can monitor for the uploaded results.
    Advantage
    This allows users to work in a disconnected environment: session timeouts can be avoided, since once a template is downloaded the user can work offline and upload the new input when all updates are done. The data can also be saved for later update and upload.
    For more details review the following:
    - R12.1.3 Install Base WebADI Mass Update Feature (Doc ID 1535936.1)
    - How To Use Install Base WebADI Mass Update Feature In Release 12.1.3 (Doc ID 1536498.1)

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson used to always hold the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1 GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and stay backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.
    For the jobs cache:
    - hudson.jobs.cache.evict_in_seconds (default=60): Seconds from last access (could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.
    - hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the builds cache instead.
    - hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever you access the Hudson home page you might consider increasing this - but first verify whether the I/O is caused by frequent eviction (see above), rather than by the cache not being large enough.
    For the builds cache:
    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.
    - hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent jobs cache setting, applied to builds.
    - hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds you might consider bumping this up.
    - hudson.job.builds.cache.max_entries (default=10240): The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
    Sample usage:
    java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
        -Dhudson.job.builds.cache.evict_in_seconds=300
    Monitoring cache usage
    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:
    $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'
    Here's a sample output:
    num     #instances    #bytes  class name
    523:            28       896  hudson.model.RunMap$LazyRunValue$Key
    1200:            3        96  hudson.model.LazyTopLevelItem$Key
    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The size in bytes can be ignored: those are just the sizes of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler. With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds can all be from 1 job or from all 3 jobs. Over time on an idle system, these should get evicted and the memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access of a job or a build by a cron thread resets the eviction timer.

    Read the article

  • Building MySQL with Boost on Windows

    - by user13177919
    As you've probably heard already, MySQL needs Boost to build. However, in the good ol' MySQL tradition, the above link only gives you the instructions on how to build it on Linux, and completely ignores the fact that there are other OSes too that people develop on. To fill in that gap, I've compiled a small step-by-step guide on how to do it on Windows. Note that I always, as a principle, build out-of-source. The typical setup I have is:
    bzr clone lp:~mysql/mysql-server/5.7 mysql-trunk
    cd mysql-trunk
    mkdir bld
    cd bld
    cmake -DWITH_DEBUG=1 -DMYSQL_PROJECT_NAME=mysql-trunk ..
    devenv /build debug mysql-trunk.sln
    This has been tested to work on a 32-bit compile using VS2013 on a Windows 7 64-bit build. Note that you'll need other things too (bison, eventually openssl, etc.) that I will assume you already have set up.
    Steps:
    1. Download Boost 1.55.0. It's the *only* version that is known to work currently.
    2. Extract boost_1_55_0/ from the zip to c:\boost\boost_1_55_0
    3. Go to Control Panel/System/Environment variables and set WITH_BOOST=C:\boost\boost_1_55_0 in User variables. Make sure you restart your open command line terminal windows after this!
    4. If you're upgrading from a non-Boost build, remove your bld/ directory and create a new one.
    5. Run cmake as you'd typically do. You should get:
    -- Local boost dir C:/boost/boost_1_55_0
    -- Local boost zip LOCAL_BOOST_ZIP-NOTFOUND
    -- BOOST_VERSION_NUMBER is #define BOOST_VERSION 105500
    -- BOOST_INCLUDE_DIR C:/boost/boost_1_55_0
    6. Build as normal (devenv /build debug ...). It should work.

    Read the article

  • In Windows 7, is there a way to log in from any user account, see the same workspace and use the running programs of another user?

    - by WickedMongoose
    Our group has a number of test stands with PCs that are currently being accessed with a single group login. It has been sent from on high that this is not the way to do things for security reasons, and we all agree. However, multiple team members from around the world log into these test stands and need to be able to access programs that have been run from what would be different user profiles if we were to no longer have a single common login. Is there a way to have a common workspace such that when different users log in, they will be able to see and interact with all running applications as if they were using a common login? Applications that we run link to and monopolize hardware resources connected to the PC, and it is time-consuming to restart and reload settings every time a new user logs in. Even if a program did not monopolize the hardware, many of these programs are resource-intensive and require a large portion of each machine's RAM to run, so trying to run an application again when it is already running from multiple user accounts would quickly consume all system resources. Simple example: I open a Chrome browser while logged into our PC. I then log out, and another team member remotes in and should be able to see my open browser and interact with it as if he were the one who opened it. Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. This is not a request for how to give all users access to the ability to run a program; it is a request for how to allow all users to interact with running applications that have been started by other users and need to be interacted with as if the new user had started and has control of the application.

    Read the article

  • Oracle 11g - No. 2: v$database.CURRENT_SCN

    - by Todd Bao
    On 11.2.0.3.0, the following query returns two different SCNs, while on 11.2.0.1.0 it returns the same SCN twice:
    select current_scn from v$database
    union all select current_scn from v$database;
    The execution plan on 11.2.0.3.0 shows how X$KCCDI (the fixed table behind V$DATABASE that supplies the CURRENT_SCN column) is accessed:
    ----------------------------------------------------
    | Id  | Operation            | Name               |
    ----------------------------------------------------
    |   0 | SELECT STATEMENT     |                    |
    |   1 |  MERGE JOIN CARTESIAN|                    |
    |*  2 |   FIXED TABLE FULL   | X$KCCDI            |
    |   3 |   BUFFER SORT        |                    |
    |   4 |    VIEW              | VW_JF_SET$6E0AEE5B |
    |   5 |     UNION-ALL        |                    |
    |   6 |      FIXED TABLE FULL| X$KCCDI2           |
    |   7 |      FIXED TABLE FULL| X$KCCDI2           |
    ----------------------------------------------------
    The CURRENT_SCN values come from the dicur_scn column of X$KCCDI (X$KCCDI2 is the other fixed table under V$DATABASE). A few experiments illustrate the behavior.
    a. Repeating the UNION ALL more times:
    SYS@fmw//Scripts> run
      1  select current_scn from v$database
      2  union all select current_scn from v$database
      3  union all select current_scn from v$database
      4* union all select current_scn from v$database
    CURRENT_SCN
    -----------
        5074384
        5074385
        5074385
        5074385
    4 rows selected.
    b. Joining with another view:
    SYS@fmw//Scripts> run
      1  select current_scn,status from v$database,v$instance
      2  union all
      3* select current_scn,status from v$database,v$instance
    CURRENT_SCN + STATUS
    ----------- + ------------------------
        5075463 + OPEN
        5075464 + OPEN
    2 rows selected.
    c. A Cartesian self-join:
    SYS@fmw//Scripts> run
      1* select a.current_scn,b.current_scn from v$database a,v$database b
    CURRENT_SCN + CURRENT_SCN
    ----------- + -----------
        5078328 +     5078329
    1 row selected.
    Since the two columns of this single row already differ, the behavior is not caused by UNION ALL itself.
    d. Querying the fixed table X$KCCDI directly shows the same thing:
    SYS@fmw//Scripts> run
      1  select dicur_scn from x$kccdi
      2* union all select dicur_scn from x$kccdi
    DICUR_SCN
    --------------------------------
    5082183
    5082184
    2 rows selected.
    SYS@fmw//Scripts> run
      1* select a.dicur_scn,b.dicur_scn from x$kccdi a,x$kccdi b
    DICUR_SCN                        + DICUR_SCN
    -------------------------------- + --------------------------------
    5082913                          + 5082914
    1 row selected.
    In short (Todd Bao): every time dicur_scn in X$KCCDI is evaluated within a statement, it can return a fresh, incremented value, so on this version each read of V$DATABASE.CURRENT_SCN effectively returns the "next scn".
    Note: the demo above is based on 11.2.0.3.

    Read the article

  • Hibernate/Spring: failed to lazily initialize - no session or session was closed

    - by Niko
    I know something similar has been asked already, but unfortunately I wasn't able to find a reliable answer - even with searching for over 2 days. The basic problem is the same as asked multiple times. I have a simple program with two POJOs, Event and User - where a user can have multiple events.

    @Entity
    @Table
    public class Event {
        private Long id;
        private String name;
        private User user;

        @Column @Id @GeneratedValue
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }

        @Column
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        @ManyToOne
        @JoinColumn(name="user_id")
        public User getUser() { return user; }
        public void setUser(User user) { this.user = user; }
    }

    @Entity
    @Table
    public class User {
        private Long id;
        private String name;
        private List events;

        @Column @Id @GeneratedValue
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }

        @Column
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        @OneToMany(mappedBy="user", fetch=FetchType.LAZY)
        public List getEvents() { return events; }
        public void setEvents(List events) { this.events = events; }
    }

    Note: This is a sample project. I really want to use lazy fetching here.
    I use Spring and Hibernate and have a simple basic-db.xml for loading:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:aop="http://www.springframework.org/schema/aop"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
               http://www.springframework.org/schema/aop
               http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

        <bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" scope="thread">
            <property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <property name="url" value="jdbc:mysql://192.168.1.34:3306/hibernateTest" />
            <property name="username" value="root" />
            <property name="password" value="" />
            <aop:scoped-proxy/>
        </bean>

        <bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
            <property name="scopes">
                <map>
                    <entry key="thread">
                        <bean class="org.springframework.context.support.SimpleThreadScope" />
                    </entry>
                </map>
            </property>
        </bean>

        <bean id="mySessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean" scope="thread">
            <property name="dataSource" ref="myDataSource" />
            <property name="annotatedClasses">
                <list>
                    <value>data.model.User</value>
                    <value>data.model.Event</value>
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
                    <prop key="hibernate.show_sql">true</prop>
                    <prop key="hibernate.hbm2ddl.auto">create</prop>
                </props>
            </property>
            <aop:scoped-proxy/>
        </bean>

        <bean id="myUserDAO" class="data.dao.impl.UserDaoImpl">
            <property name="sessionFactory" ref="mySessionFactory" />
        </bean>

        <bean id="myEventDAO" class="data.dao.impl.EventDaoImpl">
            <property name="sessionFactory" ref="mySessionFactory" />
        </bean>
    </beans>

    Note: I played around with the CustomScopeConfigurer and SimpleThreadScope, but that didn't change anything.
    I have a simple DAO impl (only pasting the UserDao - the EventDao is pretty much the same, except without the "listWith" function):

    public class UserDaoImpl implements UserDao {
        private HibernateTemplate hibernateTemplate;

        public void setSessionFactory(SessionFactory sessionFactory) {
            this.hibernateTemplate = new HibernateTemplate(sessionFactory);
        }

        @SuppressWarnings("unchecked")
        @Override
        public List listUser() {
            return hibernateTemplate.find("from User");
        }

        @Override
        public void saveUser(User user) {
            hibernateTemplate.saveOrUpdate(user);
        }

        @Override
        public List listUserWithEvent() {
            List users = hibernateTemplate.find("from User");
            for (User user : users) {
                System.out.println("LIST : " + user.getName() + ":");
                user.getEvents().size();
            }
            return users;
        }
    }

    I am getting the org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed - at the line with user.getEvents().size();
    And last but not least, here is the test class I use:

    public class HibernateTest {
        public static void main(String[] args) {
            ClassPathXmlApplicationContext ac = new ClassPathXmlApplicationContext("basic-db.xml");
            UserDao udao = (UserDao) ac.getBean("myUserDAO");
            EventDao edao = (EventDao) ac.getBean("myEventDAO");
            System.out.println("New user...");
            User user = new User();
            user.setName("test");
            Event event1 = new Event();
            event1.setName("Birthday1");
            event1.setUser(user);
            Event event2 = new Event();
            event2.setName("Birthday2");
            event2.setUser(user);
            udao.saveUser(user);
            edao.saveEvent(event1);
            edao.saveEvent(event2);
            List users = udao.listUserWithEvent();
            System.out.println("Events for users");
            for (User u : users) {
                System.out.println(u.getId() + ":" + u.getName() + " --");
                for (Event e : u.getEvents()) {
                    System.out.println("\t" + e.getId() + ":" + e.getName());
                }
            }
            ((ConfigurableApplicationContext) ac).close();
        }
    }

    And here is the exception I get:

    1621 [main] ERROR org.hibernate.LazyInitializationException - failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed
    org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed
        at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
        at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372)
        at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119)
        at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248)
        at data.dao.impl.UserDaoImpl.listUserWithEvent(UserDaoImpl.java:38)
        at HibernateTest.main(HibernateTest.java:44)
    Exception in thread "main" org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: data.model.User.events, no session or session was closed
        at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380)
        at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372)
        at org.hibernate.collection.AbstractPersistentCollection.readSize(AbstractPersistentCollection.java:119)
        at org.hibernate.collection.PersistentBag.size(PersistentBag.java:248)
        at data.dao.impl.UserDaoImpl.listUserWithEvent(UserDaoImpl.java:38)
        at HibernateTest.main(HibernateTest.java:44)
    Things I tried but did not work:
    Assigning a thread scope and using the bean factory (I used "request" or "thread" - no difference noticed):

        // scope stuff
        Scope threadScope = new SimpleThreadScope();
        ConfigurableListableBeanFactory beanFactory = ac.getBeanFactory();
        beanFactory.registerScope("request", threadScope);
        ac.refresh();
        ...

    Setting up a transaction by getting the session object from the DAO:

        ...
        Transaction tx = ((UserDaoImpl) udao).getSession().beginTransaction();
        tx.begin();
        users = udao.listUserWithEvent();
        ...

    Getting a transaction within listUserWithEvent():

        public List listUserWithEvent() {
            SessionFactory sf = hibernateTemplate.getSessionFactory();
            Session s = sf.openSession();
            Transaction tx = s.beginTransaction();
            tx.begin();
            List users = hibernateTemplate.find("from User");
            for (User user : users) {
                System.out.println("LIST : " + user.getName() + ":");
                user.getEvents().size();
            }
            tx.commit();
            return users;
        }

    I am really out of ideas by now. Also, using listUser or listEvent works just fine.

    Read the article

  • How can I add a link with a UID from a User Reference field in a table in Drupal Views?

    - by pduncan
    I have the following in Drupal 6:
    - A master CCK type which contains a User reference field and other fields. There will only be one record per user here.
    - A view of this CCK type, shown as a table, with one of the fields being the user ref from the CCK type. This field is initially shown as a user name, linking to the user profile.
    - A second CCK type which can have several pieces of data about a particular user.
    - A view for this CCK type, displaying information as a table. It takes a user id as an argument (an integer).
    I want to click on the user name in the master view and be directed to the detail view for this user. To do this, I tried selecting 'Output this field as a link' on the user field. The things available for me to replace are:
    Fields
    * [field_my_user_ref_uid_1] == Content: User (field_my_user_ref)
    Arguments
    * %1 == User: Uid
    However, the [field_my_user_ref_uid_1] element is replaced by the user name, and %1 seems to get replaced with an empty string. How can I put the user id in here?

    Read the article

  • Easy way to observe user activity - how to improve my database structure

    - by Thomas
    Welcome,
    I need some advice to improve the performance of my web application. In the beginning I had this database structure:
    USER
    -id (Primary Key)
    -name
    -password
    -email
    ....
    PROFILE
    -user Primary Key, Foreign Key (USER)
    -birthday
    -region
    -photoFile
    ...
    PAGES
    -id (Primary Key)
    -user Foreign Key(USER)
    -page
    -date
    COMMENTS
    -id (Primary Key)
    -user Foreign Key(USER)
    -page Foreign Key(PAGE)
    -comment
    -date
    FAVOURITES_PAGES
    -id (Primary Key)
    -user Foreign Key(USER)
    -favourite_page Foreign Key(PAGE)
    -date
    But now one of the most important pages of the website is the observatory, where everyone can observe the activity of other users. So I need to select all pages, comments and favourite pages of some users and display them in one list, sorted by date. For better performance (I think) I changed my structure to this (tables USER and PROFILE unchanged):
    ACTIVITY (additional table - holds the common fields: user, date)
    -id (Primary Key)
    -user Foreign Key(USER)
    -date
    -page Foreign Key(PAGE)
    -comment Foreign Key(COMMENTS)
    -favourite_page Foreign Key(FAVOURITES_PAGES)
    PAGES
    -id (Primary Key)
    -page
    COMMENTS
    -id (Primary Key)
    -page Foreign Key(PAGE)
    -comment
    FAVOURITES_PAGES
    -id (Primary Key)
    -favourite_page Foreign Key(PAGE)
    So now it is very easy to get sorted records from all tables. But I have not only foreign keys to PAGES, COMMENTS and FAVOURITES_PAGES in the ACTIVITY table - there are about ten foreign key fields, and in each record only one has a value; the others are None:
    ACTIVITY
    id  user  date        page  comment ...
    1   2     2010-02-23  None  1
    2   1     2010-02-21  1     None
    ....
    Is this a correct solution? When I display about 40 records on one page (pagination) I must wait about one second, but the database is almost empty (a few users and about 100 records in the other tables). It depends on the number of records per page - I have checked that - but why does it take so long? Is it because of the relationships? The website is built in Python/Django. Any advice/opinions?

    Read the article

  • How do I explain this to potential employers?

    - by ReferencelessBob
    Backstory: TL;DR: I've gained a lot of experience working for 5 years at one startup company, but it eventually failed. The company is gone and the owner MIA.
    When I left sixth-form college I didn't want to start a degree straight away, so when I met this guy who knew a guy who was setting up a publishing company and needed a 'techie', I thought why not. It was a very small operation: he sent mailings to schools, waited for orders to start arriving, then ordered a short run of the textbooks to be printed, stuck them in an envelope and posted them out. I was initially going to help him set up a computerized system for recording orders and payments, printing labels - really basic stuff - and I threw it together in Access in a couple of weeks. He also wanted to start taking orders online, so I set up a website and a PayPal business account. While I was doing this, I was also helping with the day-to-day running of things: taking phone orders, posting products, banking cheques, ordering textbooks, designing mailings, filing end-of-year accounts, hiring extra staff, putting stamps on envelopes. I learned so much about things I didn't even know I needed to learn about. Things were pretty good: when I started we sold about £10,000 worth of textbooks, and by my 4th year there we sold £250,000 worth. Things were looking good, but we had a problem. Our best-selling product had peaked and sales started to fall sharply. We introduced add-on products through the website to boost sales, which helped for a while, but we had simply saturated the market. Our plan was to enter the US with our star product and follow the same, slightly modified, plan as before. We set up a 1-866 number and had the calls forwarded to our UK offices. We contracted a fulfillment company, shipped over a few thousand textbooks, had a mailing printed and mailed, then sat by the phones and waited. Needless to say, it didn't work. We tried a few other things, at home and in the US, but nothing helped. We had expanded in the good times, moving into bigger offices and taking on staff to do administrative and dispatch work, but now cashflow was becoming a problem and things got tougher. We did the only thing we could and scaled things right back: the offices went, the admin staff went, I stopped taking a wage and started working from home. Nothing helped. The business was wound up about 2 years ago. In the end it turned out that the owner had built up considerable debt at the start of the business and had not paid it off during the good years, which left him in a difficult position when cashflow started to dry up. I haven't been able to contact the owner since I found out. It took me a while to get back on my feet after that, but I'm now at university doing a Computer Science degree.
    How do I show the experience I have without having to get into all the gory details of what happened?

    Read the article

  • Restricting Access to Application(s) on Point of Sale system

    - by BSchlinker
    I have a customer with two point-of-sale systems, a few workstations and a Windows 2003 SBS server. The point-of-sale systems are typically running QuickBooks Point of Sale and are logged in with a user who has restricted permissions/access (via Group Policy). Occasionally, one of the managers needs to be able to run a few additional applications - including some accounting software. I have created an additional user for this manager, allowing them to log in and access the accounting software. The problem is that switching users on the system can be slow, as QuickBooks takes a few minutes to close (on POSUser) and then reopen (on ManagerUser). If customers are waiting, this slows things down drastically. Since the accounting software is stored on a network drive, it would be easiest if the manager could simply double-click something, authenticate against the network drive / domain controller, and then the program would launch. When they close the program, the session to the network drive would be lost and the program would no longer be accessible. Is there any easy way to do this? Both users are on a domain and the system is Windows 7. I just don't want to require the user to switch back and forth. In a worst-case scenario, they might forget to switch back and leave the accounting software wide open.

    Read the article

  • Intermittent FTP login issues (Microsoft IIS FTP Service)

    - by JaggenSWE
    I've got a somewhat weird problem which I'm not sure how to troubleshoot. We have an FTP server running on a Windows Server 2003 machine using the IIS FTP Service; this is for our clients and is configured with IP restrictions. However, now ONE of the clients has started complaining that they can't log in to the server from time to time. They are just one of 10+ clients, and the only one with this issue, which makes me think it's a problem on their side. Just to be on the safe side I had a peek into the FTP logs and found something strange. Whenever they succeed in logging in, this is what I find in the logs:
    nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191747]USER, userxxx, -,
    nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:22:32, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 230, 0, [191747]PASS, -, -,
    However, if the login fails I see the following events:
    nnn.nnn.nnn.70, userxxx, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 331, 0, [191739]USER, userxxx, -,
    nnn.nnn.nnn.70, -, 2012-06-11, 09:16:33, MSFTPSVC1, SERVERNAME, nnn.nn.nn.11, 0, 0, 0, 530, 1326, [191739]PASS, -, -,
    When you look at the event where the client sends the PASS in the successful login, the server seems to know that it is in fact "userxxx" that is coupled to that PASS, but when it fails that association seems to be lost, since the user in the PASS event is set to "-". Anyone have any ideas around this? Any help would be appreciated. :)
    //JaggenSWE

    Read the article

  • How to deactivate Active Directory users 1 month after their FIRST LOGIN, instead of defining a fixed expiration date

    - by smhnaji
    We want to give access to some Active Directory users, so they can remotely access our server and download from a special folder on the server. The licenses we give to users are time-based. There should be 1-month, 2-month, ..., 1-year, ... licenses.
    CURRENT SITUATION (WHAT I DON'T WANT): When users are created and added to the OS, a fixed expiration date is set.
    WHAT I WANT: The user's expiration date should be calculated automatically from their first login. The user might not need his account right when he purchases the license. In other words: today, if a user's license is purchased on Jan 1st, he can use it until Feb 1st, no matter whether he really logs in or not. He cannot come on Feb 5th and begin using his license, because it has expired by then. What I want instead is that when he comes on Feb 5th and begins using it, the license runs until March 5th.
    The working environment is Windows Server 2012. By the word 'user', I mean Active Directory users.

    Read the article

  • How to switch / change user id within a bash script to execute commands in the same script?

    - by a1an
    Is there a way to switch user identity within a script (executed as root as part of an installation process) to execute some commands without calling an external script, then switch back to root to run other commands? Sort of:
    #!/bin/bash
    # some commands as root
    SWITCH_USER_TO user
    # some commands as user, including environment variable checks,
    # without calling an external script
    SWITCH_USER_BACK
    # some other stuff as root, maybe another user id change...

    Read the article

  • How to map a user to a domain, with Usermin and Postfix?

    - by HappyDeveloper
    What I need is to create a couple of websites, each one with only one user, which is allowed to send mails from any address @ his own domain. Example: site foo.com would have a user foo, which can send mails from [email protected]. Currently, when the user logs in to Usermin, the default 'from' field is example-dns.net (editable). I want it to show an editable field for the user part, and @foo.com (non-editable). So how can I do this? Is there some way to automate this?

    Read the article

  • How do I add categories from another user in Outlook 2003 to my list of categories in Outlook 2007?

    - by Ernst
    Hi, I'm sharing a contact list with another user on the network, but I'm using Outlook 2007 and the other user is using Outlook 2003. The other user has assigned many different categories, but those categories do not get added to the list of categories I can choose from when adding/editing contacts, even though I can see that certain contacts have them. The shared contacts originate from the other user. How do I add those categories so I can also assign them? Thanks

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.
    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".
    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#
    Here is a quick summary of what you can do with OS Analytics in Ops Center:
    - View historical charts and real time values of CPU, memory, network and disk utilization
    - Find the top CPU and memory processes in real time or at a certain historical day
    - Determine proper monitoring thresholds based on historical data
    - View Solaris services status details
    - Drill down into a process's details
    - View the busiest zones if applicable
    Where to start
    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category. In this screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with some port bindings and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the dependencies, dependents and also the location of the service log file.
The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. 
    Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have; it's unlimited. In some cases, there may be content that goes through lots of changes, but there simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome, and going through the Repository Manager applet can take time as well, to filter and find the revisions to get rid of. But there is an easier way, through the Archiver. The Export Query criteria in Archiver include a very handy field called 'Revision Rank'. Revision labels typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...), but you can't really use that field to tell it to keep the top 5 revisions, because those top 5 revision numbers are always going up. Revision rank goes the opposite direction: the very latest revision is always 0, the previous revision to that is 1, the one before that is 2, and so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater than or equal to 5. Now, as older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then when you run that archive export, you can choose to delete and remove those revisions. Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined, along with the user to run it as. The IdcCommand executable is located within the \bin\ directory:
    $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log
    In this example, our IdcCommand file to run the export and do the deletions would look like:
    IdcService=EXPORT_ARCHIVE
    aArchiveName=DeleteOlderRevisions
    aDoDelete=1
    IDC_Name=idc
    dataSource=RevisionIDs
    <<EOD>>
    You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import them if you need to recover them). For more information about Idc Command, see the Idc Command Reference Guide.
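    As one example of the OS-level scheduling mentioned above, a nightly crontab entry might look like the following sketch. All of the paths are hypothetical and must match where you installed the content server and placed the command file:

      # Run the revision cleanup every night at 2:00 am (paths are hypothetical).
      0 2 * * * /u01/ucm/server/bin/IdcCommand -f /u01/ucm/scripts/DeleteOlderRevisions.txt -u sysadmin -l /u01/ucm/logs/delete_revisions.log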

    Read the article

  • Asynchronous connectToServer

    - by Pavel Bucek
    Users of JSR-356 - Java API for WebSocket - are probably familiar with the WebSocketContainer#connectToServer method. This article is about its usage and an improvement introduced in a recent Tyrus release. WebSocketContainer#connectToServer does what it says: it connects to a WebSocket server endpoint deployed on some compliant container. It has two or three parameters (depending on which representation of the client endpoint you are providing) and returns a Session. The returned Session represents the WebSocket connection and you are instantly able to send messages, register MessageHandlers, etc. An issue might appear when you are trying to create a responsive user interface and use this method: its execution blocks until the Session is created, which usually means some container needs to be started, DNS queried, a connection created (it's even more complicated when there is some proxy on the way), etc., so nothing which might really be considered responsive. The trivial and correct solution is to do this in another thread and monitor the result, but... why should users do that? :-) Tyrus now provides async* versions of all connectToServer methods, which perform only a simple (= fast) check in the same thread and then fire a new one where all the other tasks are performed. The return type of these methods is Future<Session>. List of added methods:
    - public Future<Session> asyncConnectToServer(Class<?> annotatedEndpointClass, URI path)
    - public Future<Session> asyncConnectToServer(Class<? extends Endpoint> endpointClass, ClientEndpointConfig cec, URI path)
    - public Future<Session> asyncConnectToServer(Endpoint endpointInstance, ClientEndpointConfig cec, URI path)
    - public Future<Session> asyncConnectToServer(Object obj, URI path)
    As you can see, all connectToServer variants have their async* alternative. All these methods do throw DeploymentException, same as the synchronous variants, but some of these errors cannot be thrown as a result of the first method call, so you might get them as the cause of the ExecutionException thrown when Future<Session>.get() is called. Please let us know if you find these newly added methods useful or if you would like to change something (signature, functionality, ...) - you can send us a comment to [email protected] or ping me personally.
    Related links:
    - https://tyrus.java.net
    - https://java.net/jira/browse/TYRUS/
    - https://github.com/tyrus-project/tyrus
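    For illustration, here is a minimal sketch of the asynchronous variant using Tyrus's ClientManager. The ws://localhost:8025/echo URI is a hypothetical endpoint, not from the post:

      import java.net.URI;
      import java.util.concurrent.Future;
      import javax.websocket.ClientEndpointConfig;
      import javax.websocket.Endpoint;
      import javax.websocket.EndpointConfig;
      import javax.websocket.MessageHandler;
      import javax.websocket.Session;
      import org.glassfish.tyrus.client.ClientManager;

      public class AsyncClientDemo {
          public static void main(String[] args) throws Exception {
              ClientManager client = ClientManager.createClient();
              ClientEndpointConfig cec = ClientEndpointConfig.Builder.create().build();

              // Returns quickly; container startup, DNS lookup and the
              // connection itself happen on another thread.
              Future<Session> future = client.asyncConnectToServer(new Endpoint() {
                  @Override
                  public void onOpen(Session session, EndpointConfig config) {
                      session.addMessageHandler(new MessageHandler.Whole<String>() {
                          @Override
                          public void onMessage(String message) {
                              System.out.println("Received: " + message);
                          }
                      });
                  }
              }, cec, URI.create("ws://localhost:8025/echo"));

              // Block (or poll future.isDone()) only when the session is needed;
              // a DeploymentException may surface as the ExecutionException's cause.
              Session session = future.get();
              session.getBasicRemote().sendText("hello");
          }
      }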

    Read the article
