Search Results

Search found 30828 results on 1234 pages for 'information rights manage'.


  • Display System Information on Your Desktop with Desktop Info

    - by Asian Angel
    Do you like to monitor your system but don't want a complicated app for the job? If you value simplicity and easy configuration, join us as we look at Desktop Info.

    Desktop Info in Action

    Desktop Info comes as a zip file, so you will need to unzip the app, place it into an appropriate folder under Program Files, and create a shortcut. Do NOT delete the Read Me file; it will be extremely useful when you make changes to the configuration file. Once you have everything set up, you are ready to start Desktop Info.

    This is the default layout and set of listings displayed when you start Desktop Info for the first time. The font colors will be a mix of colors, as seen here, and the font size may be a bit small, but both are very easy to change if desired. You can access the context menu directly over the information area, so there is no need to look for it in the system tray. Notice that you can easily access that important Read Me file from here.

    The full contents of the configuration file (.ini file) are displayed here so that you can see exactly what kind of information can be displayed using the default listings. The first section is Options; you will most likely want to increase the font size while you are here. Then Items: if you are unhappy with any of the font colors in the information area, this is where you can change them, and you can also turn individual display items on or off. And finally, Files, Registry, & Event Logs.

    Here is our displayed information after a few tweaks in the configuration file. Very nice.

    Conclusion

    If you have been looking for a system information app that is simple and easy to set up, you should definitely give Desktop Info a try.

    Links: Download Desktop Info

    Read the article

  • Joy! | Important Information About Your iPad 3G

    - by Jeff Julian
    Looks like I was one of the lucky 114,000 iPad 3G customers whose email address AT&T lost to “hackers”. Why is “hackers” in quotes? I can just imagine some executive at AT&T, in their “Oh No, We Messed Up” meeting, asking what happened, and someone replying: well, we had a breach and “hackers” broke in (using the air-quotes gesture) and stole our iPad 3G customers' email addresses. Oh well, my email has surely been sold and sold again by many different vendors, so why not AT&T too. At least Dorothy Attwood could have given us her email address to pass along to someone else, instead of pushing this through a newsletter system.

    June 13, 2010

    Dear Valued AT&T Customer,

    Recently there was an issue that affected some of our customers with AT&T 3G service for iPad, resulting in the release of their customer email addresses. I am writing to let you know that no other information was exposed and the matter has been resolved. We apologize for the incident and any inconvenience it may have caused. Rest assured, you can continue to use your AT&T 3G service on your iPad with confidence.

    Here's some additional detail: On June 7 we learned that unauthorized computer “hackers” maliciously exploited a function designed to make your iPad log-in process faster by pre-populating an AT&T authentication page with the email address you used to register your iPad for 3G service. The self-described hackers wrote software code to randomly generate numbers that mimicked serial numbers of the AT&T SIM card for iPad – called the integrated circuit card identification (ICC-ID) – and repeatedly queried an AT&T web address. When a number generated by the hackers matched an actual ICC-ID, the authentication page log-in screen was returned to the hackers with the email address associated with the ICC-ID already populated on the log-in screen.

    The hackers deliberately went to great efforts with a random program to extract possible ICC-IDs and capture customer email addresses. They then put together a list of these emails and distributed it for their own publicity.

    As soon as we became aware of this situation, we took swift action to prevent any further unauthorized exposure of customer email addresses. Within hours, AT&T disabled the mechanism that automatically populated the email address. Now, the authentication page log-in screen requires the user to enter both their email address and their password.

    I want to assure you that the email address and ICC-ID were the only information that was accessible. Your password, account information, the contents of your email, and any other personal information were never at risk. The hackers never had access to AT&T communications or data networks, or your iPad. AT&T 3G service for other mobile devices was not affected.

    While the attack was limited to email address and ICC-ID data, we encourage you to be alert to scams that could attempt to use this information to obtain other data or send you unwanted email. You can learn more about phishing by visiting the AT&T website.

    AT&T takes your privacy seriously and does not tolerate unauthorized access to its customers' information or company websites. We will cooperate with law enforcement in any investigation of unauthorized system access and to prosecute violators to the fullest extent of the law.

    AT&T acted quickly to protect your information – and we promise to keep working around the clock to keep your information safe. Thank you very much for your understanding, and for being an AT&T customer.

    Sincerely, Dorothy Attwood
    Senior Vice President, Public Policy and Chief Privacy Officer for AT&T

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover, and navigate through a wide variety of big data, including structured, unstructured, and semi-structured data. With search as one of the core advanced capabilities of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery: the Endeca Server. In the last post on this subject, we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from the simple keyword-based "search boxes" in other Information Discovery products, include the following.

    The Endeca Server supports set search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for “U.S.” in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which “U.S.” appeared in many titles, making that clue less meaningful? A set analysis would reveal this and could be used to adjust relevance accordingly.

    The Endeca Server supports second-order relevance. Unlike the simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result set. Determining this second-order relevance is the key to providing effective guidance.

    Support for queries and filters. Search is the most common query type, but hardly the only one, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured, and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike “black box” relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content. The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time.

    Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable. We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important: they have helped to guide our great customers to success.
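    To make the “U.S.” example concrete, here is a small illustrative sketch in plain Python – it is not Endeca's actual implementation, just the general idea of set-aware scoring, where a term's contribution to a document's score is discounted by how prevalent the term is across the whole result set:

        def set_aware_scores(term, docs):
            """Score title matches for `term`, discounted by the term's
            prevalence within the result set. docs: list of dicts like
            {"id": ..., "title": ...}."""
            matches = [d for d in docs if term.lower() in d["title"].lower()]
            if not matches:
                return {}
            # If nearly every title in the set contains the term (e.g. "U.S."
            # in a corpus of government documents), a title match carries
            # little signal, so its weight approaches zero.
            prevalence = len(matches) / len(docs)
            weight = 1.0 - prevalence
            return {d["id"]: weight for d in matches}

        docs = [{"id": i, "title": f"U.S. report {i}"} for i in range(99)]
        docs.append({"id": 99, "title": "Trade agreement"})
        print(set_aware_scores("U.S.", docs))  # each title match scores only 0.01 here

    The same query against a set where “U.S.” is rare would give those title matches a weight near 1.0, which is exactly the adjustment the set analysis described above makes possible.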

    Read the article

  • New Version 3.1 Endeca Information Discovery Now Available

    - by Mike.Hallett(at)Oracle-BI&EPM
    Business User Self-Service Data Mash-up Analysis and Discovery, integrated with OBI11g and Hadoop

    Oracle Endeca Information Discovery 3.1 (OEID) is a major release that incorporates significant new self-service discovery capabilities for business users, including agile data mashup, extended support for unstructured analytics, and an even tighter integration with Oracle BI.

    · Self-Service Data Mashup and Discovery Dashboards: business users can combine information from multiple sources, including their own uploaded spreadsheets, to conduct analysis on the complete set. Creating discovery dashboards has been made even easier by intuitive drag-and-drop layouts and wizard-based configuration. Business users can now build new discovery applications in minutes, without depending on IT.

    · Enhanced Integration with Oracle BI: OEID 3.1 enhances its native integration with Oracle Business Intelligence Foundation. Business users can now incorporate information from trusted BI warehouses, leveraging dimensions and attributes defined in Oracle's Common Enterprise Information Model, but evolve them based on the varying day-to-day demands and requirements that they personally manage.

    · Deep Unstructured Analysis: business users can gain new insights from a wide variety of enterprise and public sources, helping companies to build an actionable Big Data strategy. With OEID's long-standing differentiation in correlating unstructured information with structured data, business users can now perform their own text mining to identify hidden concepts, without having to request support from IT. They can augment these insights with best-in-class keyword search and pattern matching, all in the context of rich, interactive visualizations and analytic summaries.

    · Enterprise-Class Self-Service Discovery: OEID 3.1 enables IT to provide a powerful self-service platform to the business as part of a broader Business Analytics strategy, preserving the value of existing investments in data quality, governance, and security. Business users can take advantage of IT-curated information to drive discovery across high volumes and varieties of data, and share insights with colleagues at a moment's notice.

    · Harvest Content from the Web with the Endeca Web Acquisition Toolkit: Oracle now provides best-of-breed data access to website content through the Oracle Endeca Web Acquisition Toolkit. This provides an agile, graphical interface for developers to rapidly access and integrate any information exposed through a web front-end. Organizations can now cost-effectively include content from consumer sites, industry forums, government or supplier portals, cloud applications, and myriad other web sources as part of their overall strategy for data discovery and unstructured analytics.

    For more information: OEID 3.1 OTN Software and Documentation Download; Endeca available for download on Software Delivery Cloud (eDelivery); New OEID 3.1 Videos on YouTube; Oracle.com Endeca Site

    Read the article

  • On checking kernel parms HP-UX & Oracle 10.2

    - by [email protected]
    Hi, I just did this little script for a customer wanting to investigate whether the kernel parameters of their HP-UX machines fit the Oracle 10.2 specifications (it looks like someone checked "User Verified" on some prerequisites at database installation time). Just want to share it. Hope it helps! --L

    ###
    ### check_kernel_parms.sh
    ###
    ###    NAME
    ###      check_kernel_parms.sh
    ###
    ###    DESCRIPTION
    ###      Checks the kernel parameters
    ###
    ###    NOTES
    ###      Valid for HP-UX OS versions and Oracle 10.2
    ###
    ###    MODIFIED   (MM/DD/YY)
    ###    lvelasco    05/20/10 - Created

    V_DATE=`/usr/bin/date +%Y%m%d_%H%M%S`

    # Unless called with 'sin_traza' (no trace), duplicate all output to a log file
    if [ "${1}" != 'sin_traza' ]; then
      V_FICHERO_LOG=`dirname $0`/`basename $0`.${V_DATE}.log
      exec 4>&1
      tee ${V_FICHERO_LOG} >&4 |&
      exec 1>&p 2>&1
    fi

    echo "${V_DATE}- check_params.sh *******************************************"
    echo "Starting kernel parameter check on host `hostname`"
    echo "****************************************************************************"

    UX_VERS=`uname -a | awk '{print $3}'`
    KCTUNE=/usr/sbin/kctune
    MEM_TOTAL_MB=`./check_mem`   # <-- external helper: should return the physical memory of the machine in bytes
    MEM_TOTAL=$((MEM_TOTAL_MB*1024))

    # nproc must be at least 4096
    NPROC=$(${KCTUNE} | grep nproc | grep -v '(nproc' | grep -v '\*nproc' | awk '{print $2}')
    if [ ${NPROC} -lt 4096 ]
    then
      echo "[FAILED] Information: parameter nproc, current value ${NPROC}, must be at least 4096"
    else
      echo "[OK] Information: parameter nproc, current value ${NPROC}, must be at least 4096"
    fi

    # ksi_alloc_max must be at least nproc * 8
    KSI_ALLOC_MAX_T=$(${KCTUNE} | grep ksi_alloc | awk '{print $2}')
    KSI_ALLOC_MAX=$((NPROC*8))
    if [ ${KSI_ALLOC_MAX_T} -lt ${KSI_ALLOC_MAX} ]
    then
      echo "[FAILED] Information: parameter ksi_alloc_max, current value ${KSI_ALLOC_MAX_T}, must be at least ${KSI_ALLOC_MAX}"
    else
      echo "[OK] Information: parameter ksi_alloc_max, current value ${KSI_ALLOC_MAX_T}, must be at least ${KSI_ALLOC_MAX}"
    fi

    # executable_stack must be 0
    EXECUTABLE_STACK=$(${KCTUNE} | grep executable_stack | awk '{print $2}')
    if [ ${EXECUTABLE_STACK} -ne 0 ]
    then
      echo "[FAILED] Information: parameter executable_stack, current value ${EXECUTABLE_STACK}, must be equal to 0"
    else
      echo "[OK] Information: parameter executable_stack, current value ${EXECUTABLE_STACK}, must be equal to 0"
    fi

    # max_thread_proc must be at least 1024
    MAX_THREAD_PROC=$(${KCTUNE} | grep max_thread_proc | awk '{print $2}')
    if [ ${MAX_THREAD_PROC} -lt 1024 ]
    then
      echo "[FAILED] Information: parameter max_thread_proc, current value ${MAX_THREAD_PROC}, must be at least 1024"
    else
      echo "[OK] Information: parameter max_thread_proc, current value ${MAX_THREAD_PROC}, must be at least 1024"
    fi

    # maxdsiz must be at least 1073741824
    MAXDSIZ=$(${KCTUNE} | grep 'maxdsiz ' | awk '{print $2}')
    if [ ${MAXDSIZ} -lt 1073741824 ]
    then
      echo "[FAILED] Information: parameter maxdsiz, current value ${MAXDSIZ}, must be at least 1073741824"
    else
      echo "[OK] Information: parameter maxdsiz, current value ${MAXDSIZ}, must be at least 1073741824"
    fi

    # maxdsiz_64bit must be at least 2147483648
    MAXDSIZ_64BIT=$(${KCTUNE} | grep maxdsiz_64bit | awk '{print $2}')
    if [ ${MAXDSIZ_64BIT} -lt 2147483648 ]
    then
      echo "[FAILED] Information: parameter maxdsiz_64bit, current value ${MAXDSIZ_64BIT}, must be at least 2147483648"
    else
      echo "[OK] Information: parameter maxdsiz_64bit, current value ${MAXDSIZ_64BIT}, must be at least 2147483648"
    fi

    # maxssiz must be at least 134217728
    MAXSSIZ=$(${KCTUNE} | grep 'maxssiz ' | awk '{print $2}')
    if [ ${MAXSSIZ} -lt 134217728 ]
    then
      echo "[FAILED] Information: parameter maxssiz, current value ${MAXSSIZ}, must be at least 134217728"
    else
      echo "[OK] Information: parameter maxssiz, current value ${MAXSSIZ}, must be at least 134217728"
    fi

    # maxssiz_64bit must be at least 1073741824
    MAXSSIZ_64BIT=$(${KCTUNE} | grep maxssiz_64bit | awk '{print $2}')
    if [ ${MAXSSIZ_64BIT} -lt 1073741824 ]
    then
      echo "[FAILED] Information: parameter maxssiz_64bit, current value ${MAXSSIZ_64BIT}, must be at least 1073741824"
    else
      echo "[OK] Information: parameter maxssiz_64bit, current value ${MAXSSIZ_64BIT}, must be at least 1073741824"
    fi

    # maxuprc must be at least (nproc * 9) / 10
    MAXUPRC_T=$(${KCTUNE} | grep maxuprc | awk '{print $2}')
    MAXUPRC=$(((NPROC*9)/10))
    if [ ${MAXUPRC_T} -lt ${MAXUPRC} ]
    then
      echo "[FAILED] Information: parameter maxuprc, current value ${MAXUPRC_T}, must be at least ${MAXUPRC}"
    else
      echo "[OK] Information: parameter maxuprc, current value ${MAXUPRC_T}, must be at least ${MAXUPRC}"
    fi

    # msgmni must be at least nproc
    MSGMNI=$(${KCTUNE} | grep msgmni | awk '{print $2}')
    if [ ${MSGMNI} -lt ${NPROC} ]
    then
      echo "[FAILED] Information: parameter msgmni, current value ${MSGMNI}, must be at least ${NPROC}"
    else
      echo "[OK] Information: parameter msgmni, current value ${MSGMNI}, must be at least ${NPROC}"
    fi

    # msgseg (HP-UX 11.23 only) must be at least 32767
    if [ ${UX_VERS} = "B.11.23" ]
    then
      MSGSEG=$(${KCTUNE} | grep msgseg | awk '{print $2}')
      if [ ${MSGSEG} -lt 32767 ]
      then
        echo "[FAILED] Information: parameter msgseg, current value ${MSGSEG}, must be at least 32767"
      else
        echo "[OK] Information: parameter msgseg, current value ${MSGSEG}, must be at least 32767"
      fi
    fi

    # msgtql must be at least nproc
    MSGTQL=$(${KCTUNE} | grep msgtql | awk '{print $2}')
    if [ ${MSGTQL} -lt ${NPROC} ]
    then
      echo "[FAILED] Information: parameter msgtql, current value ${MSGTQL}, must be at least ${NPROC}"
    else
      echo "[OK] Information: parameter msgtql, current value ${MSGTQL}, must be at least ${NPROC}"
    fi

    # nfile (HP-UX 11.23 only) must be at least 15 * nproc + 2048
    if [ ${UX_VERS} = "B.11.23" ]
    then
      NFILE_T=$(${KCTUNE} | grep nfile | awk '{print $2}')
      NFILE=$((15*NPROC+2048))
      if [ ${NFILE_T} -lt ${NFILE} ]
      then
        echo "[FAILED] Information: parameter nfile, current value ${NFILE_T}, must be at least ${NFILE}"
      else
        echo "[OK] Information: parameter nfile, current value ${NFILE_T}, must be at least ${NFILE}"
      fi
    fi

    # nflocks must be at least nproc
    NFLOCKS=$(${KCTUNE} | grep nflocks | awk '{print $2}')
    if [ ${NFLOCKS} -lt ${NPROC} ]
    then
      echo "[FAILED] Information: parameter nflocks, current value ${NFLOCKS}, must be at least ${NPROC}"
    else
      echo "[OK] Information: parameter nflocks, current value ${NFLOCKS}, must be at least ${NPROC}"
    fi

    # ninode must be at least 8 * nproc + 2048
    NINODE_T=$(${KCTUNE} | grep ninode | grep -v vx | awk '{print $2}')
    NINODE=$((8*NPROC+2048))
    if [ ${NINODE_T} -lt ${NINODE} ]
    then
      echo "[FAILED] Information: parameter ninode, current value ${NINODE_T}, must be at least ${NINODE}"
    else
      echo "[OK] Information: parameter ninode, current value ${NINODE_T}, must be at least ${NINODE}"
    fi

    # ncsize must be at least ninode + 1024
    NCSIZE_T=$(${KCTUNE} | grep ncsize | awk '{print $2}')
    NCSIZE=$((NINODE+1024))
    if [ ${NCSIZE_T} -lt ${NCSIZE} ]
    then
      echo "[FAILED] Information: parameter ncsize, current value ${NCSIZE_T}, must be at least ${NCSIZE}"
    else
      echo "[OK] Information: parameter ncsize, current value ${NCSIZE_T}, must be at least ${NCSIZE}"
    fi

    # msgmap (HP-UX 11.23 only) must be at least msgtql + 2
    if [ ${UX_VERS} = "B.11.23" ]
    then
      MSGMAP_T=$(${KCTUNE} | grep msgmap | awk '{print $2}')
      MSGMAP=$((MSGTQL+2))
      if [ ${MSGMAP_T} -lt ${MSGMAP} ]
      then
        echo "[FAILED] Information: parameter msgmap, current value ${MSGMAP_T}, must be at least ${MSGMAP}"
      else
        echo "[OK] Information: parameter msgmap, current value ${MSGMAP_T}, must be at least ${MSGMAP}"
      fi
    fi

    # nkthread must be at least ((nproc * 7) / 4) + 16
    NKTHREAD_T=$(${KCTUNE} | grep nkthread | awk '{print $2}')
    NKTHREAD=$((((NPROC*7)/4)+16))
    if [ ${NKTHREAD_T} -lt ${NKTHREAD} ]
    then
      echo "[FAILED] Information: parameter nkthread, current value ${NKTHREAD_T}, must be at least ${NKTHREAD}"
    else
      echo "[OK] Information: parameter nkthread, current value ${NKTHREAD_T}, must be at least ${NKTHREAD}"
    fi

    # semmni must be at least nproc
    SEMMNI=$(${KCTUNE} | grep semmni | grep -v '(semm' | awk '{print $2}')
    if [ ${SEMMNI} -lt ${NPROC} ]
    then
      echo "[FAILED] Information: parameter semmni, current value ${SEMMNI}, must be at least ${NPROC}"
    else
      echo "[OK] Information: parameter semmni, current value ${SEMMNI}, must be at least ${NPROC}"
    fi

    # semmns must be at least semmni + 2
    SEMMNS_T=$(${KCTUNE} | grep semmns | awk '{print $2}')
    SEMMNS=$((SEMMNI+2))
    if [ ${SEMMNS_T} -lt ${SEMMNS} ]
    then
      echo "[FAILED] Information: parameter semmns, current value ${SEMMNS_T}, must be at least ${SEMMNS}"
    else
      echo "[OK] Information: parameter semmns, current value ${SEMMNS_T}, must be at least ${SEMMNS}"
    fi

    # semmnu must be at least nproc - 4
    SEMMNU_T=$(${KCTUNE} | grep semmnu | awk '{print $2}')
    SEMMNU=$((NPROC-4))
    if [ ${SEMMNU_T} -lt ${SEMMNU} ]
    then
      echo "[FAILED] Information: parameter semmnu, current value ${SEMMNU_T}, must be at least ${SEMMNU}"
    else
      echo "[OK] Information: parameter semmnu, current value ${SEMMNU_T}, must be at least ${SEMMNU}"
    fi

    # semvmx (HP-UX 11.23 only) must be at least 32767
    if [ ${UX_VERS} = "B.11.23" ]
    then
      SEMVMX=$(${KCTUNE} | grep semvmx | awk '{print $2}')
      if [ ${SEMVMX} -lt 32767 ]
      then
        echo "[FAILED] Information: parameter semvmx, current value ${SEMVMX}, must be at least 32767"
      else
        echo "[OK] Information: parameter semvmx, current value ${SEMVMX}, must be at least 32767"
      fi
    fi

    # shmmni must be at least 512
    SHMMNI=$(${KCTUNE} | grep shmmni | awk '{print $2}')
    if [ ${SHMMNI} -lt 512 ]
    then
      echo "[FAILED] Information: parameter shmmni, current value ${SHMMNI}, must be at least 512"
    else
      echo "[OK] Information: parameter shmmni, current value ${SHMMNI}, must be at least 512"
    fi

    # shmseg must be at least 120
    SHMSEG=$(${KCTUNE} | grep shmseg | awk '{print $2}')
    if [ ${SHMSEG} -lt 120 ]
    then
      echo "[FAILED] Information: parameter shmseg, current value ${SHMSEG}, must be at least 120"
    else
      echo "[OK] Information: parameter shmseg, current value ${SHMSEG}, must be at least 120"
    fi

    # vps_ceiling must be at least 64
    VPS_CEILING=$(${KCTUNE} | grep vps_ceiling | awk '{print $2}')
    if [ ${VPS_CEILING} -lt 64 ]
    then
      echo "[FAILED] Information: parameter vps_ceiling, current value ${VPS_CEILING}, must be at least 64"
    else
      echo "[OK] Information: parameter vps_ceiling, current value ${VPS_CEILING}, must be at least 64"
    fi

    # shmmax must be at least the total physical memory
    SHMMAX=$(${KCTUNE} | grep shmmax | awk '{print $2}')
    if [ ${SHMMAX} -lt ${MEM_TOTAL} ]
    then
      echo "[FAILED] Information: parameter shmmax, current value ${SHMMAX}, must be at least ${MEM_TOTAL}"
    else
      echo "[OK] Information: parameter shmmax, current value ${SHMMAX}, must be at least ${MEM_TOTAL}"
    fi

    exit 0

    Read the article

  • Overloading stream insertion without violating information hiding?

    - by Chris
    I'm using yaml-cpp for a project. I want to overload the << and >> operators for some classes, but I'm having an issue grappling with how to "properly" do this. Take the Note class, for example. It's fairly boring:

    class Note {
    public:
      // constructors
      Note( void );
      ~Note( void );

      // public accessor methods
      void number( const unsigned long& number ) { _number = number; }
      unsigned long number( void ) const { return _number; }
      void author( const unsigned long& author ) { _author = author; }
      unsigned long author( void ) const { return _author; }
      void subject( const std::string& subject ) { _subject = subject; }
      std::string subject( void ) const { return _subject; }
      void body( const std::string& body ) { _body = body; }
      std::string body( void ) const { return _body; }

    private:
      unsigned long _number;
      unsigned long _author;
      std::string _subject;
      std::string _body;
    };

    The << operator is easy sauce. In the .h:

    YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v );

    And in the .cpp:

    YAML::Emitter& operator << ( YAML::Emitter& out, const Note& v )
    {
      out << v.number() << v.author() << v.subject() << v.body();
      return out;
    }

    No sweat. Then I go to declare the >> operator. In the .h:

    void operator >> ( const YAML::Node& node, Note& note );

    But in the .cpp I get stuck:

    void operator >> ( const YAML::Node& node, Note& note )
    {
      node[0] >> ?
      node[1] >> ?
      node[2] >> ?
      node[3] >> ?
      return;
    }

    If I write things like node[0] >> v._number; then I would need to make all of the Note fields public, which defeats everything I was taught (by professors, books, and experience) about data hiding. I feel like writing node[0] >> temp0; v.number( temp0 ); all over the place is not only tedious, error-prone, and ugly, but rather wasteful (what with the extra copies). Then I got wise: I attempted to move these two operators into the Note class itself and declare them as friends, but the compiler (GCC 4.4) didn't like that:

    src/note.h:44: error: ‘YAML::Emitter& Note::operator<<(YAML::Emitter&, const Note&)’ must take exactly one argument
    src/note.h:45: error: ‘void Note::operator(const YAML::Node&, Note&)’ must take exactly one argument

    Question: How do I "properly" overload the >> operator for a class, without violating the information hiding principle and without excessive copying?
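    A sketch of how this is commonly resolved (not from the original thread, and assuming the yaml-cpp extraction operators shown in the question): keep operator >> a two-argument free function, but add a friend declaration for it inside the class. The compiler errors above arise because the operators were declared as members, which forces the one-argument member signature; a friend declaration keeps the free-function signature while granting access to the private fields.

    class Note {
    public:
      // ... accessors as above ...

      // Grant the free function access to the private fields. Because it is
      // declared 'friend', it remains a two-argument free function rather
      // than a member operator.
      friend void operator >> ( const YAML::Node& node, Note& note );

    private:
      unsigned long _number;
      unsigned long _author;
      std::string   _subject;
      std::string   _body;
    };

    void operator >> ( const YAML::Node& node, Note& note )
    {
      // Direct access to the private members is legal here thanks to the
      // friend declaration -- no temporaries and no public fields needed.
      node[0] >> note._number;
      node[1] >> note._author;
      node[2] >> note._subject;
      node[3] >> note._body;
    }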

    Read the article

  • Block access to specific applications

    - by Jason Aren
    I would like to give several users rights on a computer running Ubuntu to do most administrative functions, such as adding/removing programs, saving files, and changing settings. However, I would like to block them from using several specific applications. Is this possible, and how would I do so? To provide a bit more detail: I am trying to set up Gnome Nanny to block adult websites on my kids' computer. I'd like to give them full access to the computer EXCEPT for Gnome Nanny. Windows has a program called K9 that cannot be turned off or uninstalled unless the user has the password, EVEN if the user is an admin. It sounds like this isn't available on Ubuntu without a rather involved process of setting permissions on a large list of applications and functions to mimic admin rights.
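    One hedged, permissions-based sketch (the binary path and the account name below are assumptions; locate the real management front-end on your install first, e.g. with which): restrict who may execute the blocking tool by group ownership.

        # Create a group for the only accounts allowed to run the blocked app
        sudo addgroup nannyadmins
        sudo adduser parentaccount nannyadmins   # 'parentaccount' is a placeholder

        # Restrict the binary to root and that group; dpkg-statoverride makes
        # the change survive package upgrades (the path is an assumption)
        sudo dpkg-statoverride --update --add root nannyadmins 750 /usr/bin/nanny-admin-console

    Note that this only blocks launching the tool: a user who genuinely holds full sudo rights can still undo the override, which is the same limitation the question runs into.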

    Read the article

  • Get Information to Your Blog with Microsoft Broadcaster

    - by Matthew Guay
    Do people often ask you for advice about technology, or do you write a tech-focused blog or newsletter? Here's how you can get information about Microsoft technology to share with your readers, using Microsoft Broadcaster.

    Microsoft Broadcaster is a new service from Microsoft to help publishers, bloggers, developers, and other IT professionals find relevant information and resources from Microsoft. You can use it to help discover things to write about, or simply to discover new information about the technology you use. Broadcaster will also notify you when new resources are available about the topics that interest you. Let's look at how you could use this to expand your blog and help your users.

    Getting Started

    Head over to the Microsoft Broadcaster site (link below), and click Join to get started. Sign in with your Windows Live ID, or create a new account if you don't already have one. Near the bottom of the page, add information about the blog, newsletter, or group you want to share Broadcaster information with. Click Add when you're done entering information. You can enter as many sites or groups as you wish. When you've entered all of your information, click the Apply button at the bottom of the page. Broadcaster will then let you know your information has been submitted, but you'll need to wait several days to see whether you are approved.

    Our application was approved about two days after applying, though this may vary. When you're approved, you'll receive an email letting you know. Return to the Broadcaster website (link below), but this time click Sign in, and accept the terms of use by clicking I Accept at the bottom of the page. Confirm that the information you entered previously is correct, and then click Configure my keywords at the bottom of the page.

    Now you can pick the topics you want to stay informed about. Type keywords in the textbox, and it will bring up relevant topics with IntelliSense. Here we've added several topics to keep up with. Next, select the Microsoft products you want to keep track of; if a product you want is not listed, make sure to add it in the keywords section as above. Finally, select the types of content you wish to see, including articles, eBooks, webcasts, and more, and click Configure My Alerts at the bottom of the page.

    Broadcaster can automatically email you when new content is found. If you would like this, click Subscribe; otherwise, simply click Access Dashboard to go ahead and find your personalized content. If you choose to receive emails of new content, you'll have to configure it with Windows Live Alerts: click Continue to set this up, select whether you want to receive Messenger alerts, emails, and/or text messages when new content is available, and click Save when you're finished. Finally, select how often you want to be notified, and then click Access Dashboard to view the content currently available.

    Finding Content For Your Blog, Site, or Group

    Now you can find content tailored to your interests from the dashboard. To access the dashboard in the future, simply go to the Broadcaster site and click Sign In. Here you can see available content, search for different topics, or customize the topics shown. You'll see snippets of information from various Microsoft videos, articles, whitepapers, eBooks, and more, depending on your settings.

    Click the link at the top of a snippet to view the content, or right-click and copy the link to use in emails or on social networks like Twitter. If you'd like to add a snippet to your website or blog, click the Download content link at the bottom. Now you can preview what the snippet will look like on your site, and change the width or height to fit your site. You can view and edit the source code of the snippet from the box at the bottom, and then copy it to use on your site. Paste it into the HTML of a blog post, email, webpage, or anywhere else you wish to share it. Here we're pasting it into the HTML editor in Windows Live Writer so we can post it to a blog. After adding a title and an opening paragraph, we have a nice blog post that only took a few minutes to put together but should still be useful for our readers. You can check out the blog post we created at the link below. Readers can click on the links, which will direct them to the content on Microsoft's websites.

    Conclusion

    If you frequently need to find educational and informative content about Microsoft products and services, Broadcaster can be a great service to keep you up to date. The service worked quite well in our tests and generally found content relevant to our keywords. We had difficulty embedding links to eBooks listed by Broadcaster, but everything else worked for us. Now you can always have high-quality content to help your customers, coworkers, friends, and more, and you just might find something that helps you, too!

    Links: Microsoft Broadcaster (registration required); Example Post at Techinch.com with Content from Microsoft Broadcaster

    Read the article

  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver internal and/or external services is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources, and the use of contingent workers (contractors, consultants, temporary and part-time staff) has increased significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless, integrated fashion not only saves organizations money and increases efficiency, but creates a better place to work for workers of all kinds.

    Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors; the term "contingent labor" is used interchangeably with "contractor."

    Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "What Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:

    1. The enterprise licensed software model, in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;
    2. The software-as-a-service model, in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing partner; and
    3. The managed service provider (MSP) model, in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program.

    At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement.

    Option 1: Enterprise Licensed Software

    In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors.

    Many customers choose this enterprise licensed software model because of the functionality and natural integration of the PeopleSoft applications, and because the cost for the PeopleSoft software is explicit: there is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce.

    This model offers you the highest level of control and flexibility, since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting, because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are:

    The ability to create a tiered network of preferred suppliers, including competencies, pricing agreements, and elaborate candidate management capabilities.
    Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval, for both resource-based and deliverable-based services.
    The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce.

    Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can fill job openings with contingent labor, guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management, and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts – R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on.

    With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff, or hires, the expertise in these domains to utilize the software and interact with suppliers and contractors.

    Option 2: Software as a Service (SaaS)

    The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and that they can pay for based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers.

    Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software falls largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives.

    Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly onboarding and offboarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive, even for the most computer-phobic users.

    The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model, since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains.

    Option 3: Managed Services Provider (MSP)

    Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force.

    With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will, to a certain extent, affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high-volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects.

    One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers; these are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines.

    Concluding Thoughts

    There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

    Read the article

  • How to manage PHP projects?

    - by Shakti Singh
    I am a PHP developer, and I want to know how to manage a project when more than one developer is working on it at the same time. Are there tools to manage this? If so, please let me know, as I'm not aware of them. I'm looking for a tool that can show which developer is working on what task, when each will be available for another task, and who is free right now – something for managing the project as well as the utilization of your team members; a tool that can take care of all the phases of a project, from coding to delivery.

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology, from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way topology – between two or more servers – there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails with a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column that already exists at the destination database.

    Solution

    There are two ways to address this situation: assign a range of identity values to each server, or work with parallel identity values. The first method requires some maintenance while the second does not, so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article.

    In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing with a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert the value of 2, and so on, while on server2 the first insert will get the identity value of 1,000,000,001, the second insert 1,000,000,002, and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server, and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place from the range the identity value belongs to.

    In the second solution, you do not manage ranges but instead enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:

    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );

    And a table on server2 looks like this:

    CREATE TABLE T1 (
      c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
     ,c2 int NOT NULL
    );

    When rows are inserted into these two tables, the resulting identity values look like this:

    Server1: 1, 6, 11, 16, 21, 26…
    Server2: 2, 7, 12, 17, 22, 27…

    This assures no identity value conflicts while leaving room for three additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example.

    Continues…
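    As a quick sketch of how the first method's ranges might be seeded (the table name dbo.T1 and the boundary values here are illustrative, not taken from the article's scripts), DBCC CHECKIDENT reseeds each server so its next generated identity falls inside its assigned range:

        -- Server1: with an increment of 1, the next identity generated will be 1
        DBCC CHECKIDENT ('dbo.T1', RESEED, 0);

        -- Server2: the next identity generated will be 1000000001
        DBCC CHECKIDENT ('dbo.T1', RESEED, 1000000000);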

    Read the article

  • Energy Firms Targeted for Sensitive Documents

    - by martin.abrahams
    Numerous multinational energy companies have been targeted by hackers who have been focusing on financial documents related to oil and gas field exploration, bidding contracts, and drilling rights, as well as proprietary industrial process documents, according to a new McAfee report. "It ... speaks to quite a sad state of our critical infrastructure security. These were not sophisticated attacks ... yet they were very successful in achieving their goals," said Dmitri Alperovitch, McAfee's vice president for threat research.

    Apparently, the attacks can be traced back over several years, creating a sustained security compromise that has provided access to highly sensitive information of huge financial value to competitors.

    The value of IRM as an additional layer of protection is clear. Whether your infrastructure security is in a sad state or is state of the art, breaches are always a possibility – and in any case, a lot of sensitive information is shared with third parties whose infrastructure security might not be as good as yours. IRM protects the individual information assets directly so that, even if infrastructure security is compromised, your critical information is encrypted and trackable and only accessible to authenticated, authorised, audited users.

    The full McAfee report is available here.

    Read the article

  • How to Easily Optimize & Manage Multiple Computers with Soluto

    - by Chris Hoffman
    Soluto is a quick, simple way to optimize and manage one or more computers – and it really shines for managing multiple ones. If you're already tech support for family or friends, Soluto can save you a lot of time. We've written about Soluto in the past, when it was in a closed beta. Anyone can now sign up for a free Soluto account and manage up to five computers from the same account.

    Read the article

  • Gladinet Cloud Desktop tool to manage Windows Azure Blob Storage from Windows Explorer

    - by kaleidoscope
    Gladinet Cloud Desktop is designed to make it easier for Windows Azure users to manage Windows Azure Blob storage directly from Windows Explorer. The solution makes it possible for Windows Azure Blob storage to be mapped as a virtual network drive. "You can map multiple Azure Blob Storage accounts as side-by-side virtual folders in the same network drive. You can drag & drop between a local drive and the Azure drive."

    For more information: http://www.ditii.com/2010/01/04/gladinet-cloud-desktop-tool-to-manage-windows-azure-blob-storage-from-windows-explorer/
    To download the tool: http://www.gladinet.com/p/download_starter_direct.htm

    Ritesh, D

    Read the article

  • Best way to manage a changelog

    - by Gnial0id
    I'm currently developing a WinForms application. In order to inform the client about the improvements and corrections made in the latest version, I would like to maintain and display a changelog. I have mostly found existing changelogs on websites (the term is widely used) or explanations of how to manage release numbers, which is not what I'm after. So, these are my questions: Is there a good practice for changelog management (XML, plain text in the app, etc.) in a desktop application? And what is the best way to display it (an external website, or inside the WinForms application)? Thanks.
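    As a sketch of one common approach (the schema below is an assumption for illustration, not an established standard), a small XML file shipped alongside the application keeps entries per release, which is easy to both version-control and render inside a WinForms dialog:

        <?xml version="1.0" encoding="utf-8"?>
        <changelog>
          <release version="1.2.0" date="2012-05-01">
            <improvement>Faster startup on large databases</improvement>
            <fix>Crash when exporting an empty report</fix>
          </release>
          <release version="1.1.0" date="2012-03-15">
            <fix>Wrong totals in the monthly summary</fix>
          </release>
        </changelog>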

    Read the article

  • How to manage the images for my Desktop Application

    - by NonExistent
    What's the better way to manage the image files of my app? I've been thinking about the way I do it right now (saving the image as a BLOB in the DB), and I ask myself whether it would be better to manage the image as text in my DB – I mean, convert the image to hex (length of 500), save it in the DB as text, and when retrieving it convert it from hex back to an image, or something like that. What do you, as experienced programmers, consider the better way? Maybe the question is too broad, but I need to know, and nobody has answered me anywhere...
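    For scale, a quick sketch (plain Python, not tied to any particular database) shows the storage cost of the hex-as-text idea: hex encoding always produces two characters per byte, doubling the stored size before any column-length limit such as the 500 characters mentioned above even comes into play.

        with open("photo.jpg", "rb") as f:   # any binary file will do
            data = f.read()

        hex_text = data.hex()                # two hex characters per byte

        print(len(data))       # e.g. 245760 bytes for a small JPEG
        print(len(hex_text))   # exactly 2 * len(data) characters
        assert len(hex_text) == 2 * len(data)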

    Read the article

  • WMI Rights required to read root\MicrosoftIISv2 in IIS7 with IIS6 compatibility mode

    - by JoeBilly
    I need to manage my IIS7 (Windows Server 2008) remotely with a WMI IIS6 API, so I added the IIS6 WMI Compatibility and IIS6 Metabase Compatibility roles to gain access to the root\MicrosoftIIsv2 namespace. I have a domain account which is not an administrator on the remote machine; when that account is given administrator rights, everything is OK. I configured the following rights for my domain account to access the root\MicrosoftIIsv2 WMI namespace remotely; note that these rights work perfectly on IIS6 and Windows Server 2003:

    DCOM: account in Distributed COM Users; remote & local access to DCOM
    WMI, Root\CIMV2 (I need access here too): Execute Methods, Enable Account, Remote Enable
    WMI, Root\Default (I need access here too): Execute Methods, Enable Account, Remote Enable
    WMI, Root\MicrosoftIISv2: Execute Methods, Enable Account, Provider Write, Remote Enable
    IIS Metabase (Metabase Explorer): LM – Full Control (W3SVC inherits these permissions)

    I also tried granting some access on C:\Windows\System32\inetsrv; I don't know if that is needed.

    My issue is: I can't list the IIS websites (\root\MicrosoftIISv2:IIsWebServerSetting.Name="W3SVC/*"). I don't get an 'access denied' error – nothing is returned at all. My API and PowerShell tests can connect and execute queries in the root\MicrosoftIISv2 namespace, and I can read the IIsComputer class, e.g.:

    Get-WmiObject IIsComputer -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT *

    But I can't read IIsWebServerSetting, IIsWebServer, and so on to list the websites: the query returns an empty collection, e.g.:

    Get-WmiObject IIsWebServerSetting -namespace "ROOT\MicrosoftIISv2" -authentication PacketPrivacy | SELECT ServerComment

    All queries work perfectly if the account is an administrator, as already said. I am using PacketPrivacy authentication.

    FYI: I get a Warning Event 5605 whether or not the account has administrator rights, so that does not seem to have an impact: "The root\MicrosoftIISv2 namespace is marked with the RequiresEncryption flag. Access to this namespace might be denied if the script or application does not have the appropriate authentication level. Change the authentication level to Pkt_Privacy and run the script or application again."

    OK, I have some more information. When I use IIS 6 Metabase Explorer with my administrator account, I can see the rights are correctly inherited for my non-administrator account. But when I try to connect using my non-administrator account, I can list the LM node, yet I get an "access denied, failed to get a key's data" error when I try to browse the child nodes. I'll check further. I also traced the WMI activity, and everything seems OK; this tends to confirm that the problem lies in the IIS rights.

    Read the article

  • login .bat script in windows xp user to run with admin rights

    - by Kryan
        if exist C:\Windows\System32\CCM\CcmExec.exe (
            net use /del Z:
        ) else (
            net use z: \\c-svsccm01\SMS_CMB\Client
            start /d "Z:\" CCMSetup.exe
        )

    This is a .bat file I created to run an .exe file from the mapped location Z:\. It runs perfectly under the administrator account, but not under the user account (which doesn't have admin rights to install an .exe file). Under the user account the mapping can be created and deleted, but running CCMSetup.exe doesn't work. Please help: how can I run CCMSetup.exe with admin rights from the user account?

    Read the article

  • How much information can websites get about your browser/PC?

    - by Pickledegg
    I am trying to determine whether the information shown on this website is the absolute maximum amount of information a web server can obtain about a visitor. Does anyone know of any other sites that can get more information from the user passively like this? I'm not talking about port sniffing or any kind of interaction from the user, just the info that a server can get from a 'dumb' visit.
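    For context, a minimal sketch (a Java servlet; the class name is an illustrative assumption) of the kind of data every 'dumb' visit hands the server passively: the client IP, plus whatever headers the browser volunteers, such as User-Agent, Accept-Language, and Referer:

        import java.io.IOException;
        import java.util.Enumeration;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class VisitorInfoServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                resp.setContentType("text/plain");
                // The client IP (or that of its proxy) comes with the TCP connection itself.
                resp.getWriter().println("IP: " + req.getRemoteAddr());
                // Every header the browser sent: User-Agent, Accept-Language, Referer, ...
                for (Enumeration<?> names = req.getHeaderNames(); names.hasMoreElements();) {
                    String name = (String) names.nextElement();
                    resp.getWriter().println(name + ": " + req.getHeader(name));
                }
            }
        }

    Anything much beyond this (screen size, installed plugins, fonts) typically requires active probing via JavaScript or a plugin rather than a purely passive visit.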

    Read the article

  • Developing custom MBeans to manage J2EE Applications (Part III)

    - by philippe Le Mouel
    This is the third and final part in a series of blogs that demonstrate how to add management capability to your own application using JMX MBeans.

    In Part I we saw:
    - How to implement a custom MBean to manage configuration associated with an application.
    - How to package the resulting code and configuration as part of the application's ear file.
    - How to register MBeans upon application startup, and unregister them upon application stop (or undeployment).
    - How to use generic JMX clients such as JConsole to browse and edit our application's MBean.

    In Part II we saw:
    - How to add localized descriptions to our MBean, MBean attributes, MBean operations and MBean operation parameters.
    - How to specify meaningful names for our MBean operation parameters.

    We also touched on future enhancements that will simplify how we can implement localized MBeans. In this third and last part, we will rewrite our MBean to simplify how we add localized descriptions. To do so we will take advantage of the functionality already described in Part II, which is now part of WebLogic 10.3.3.0. We will show how to use WebLogic's localization support to localize our MBeans based on the client's Locale, independently of the server's Locale: each client will see MBean descriptions localized for his or her own Locale. We will demonstrate this using JConsole, and also using a sample programmatic JMX Java client.

    The complete code sample and associated build files for Part III are available as a zip file. The code has been tested against WebLogic Server 10.3.3.0 and JDK 6. To build and deploy our sample application, please follow the instructions provided in Part I, as they also apply to Part III's code and associated zip file.

    Providing custom descriptions, take II

    In Part II we localized our MBean descriptions by extending the StandardMBean class and overriding its many getDescription methods. WebLogic 10.3.3.0, similarly to JDK 7, can automatically localize MBean descriptions as long as they follow these conventions. Description resource bundle keys are named according to:

    - MBean description: <MBeanInterfaceClass>.mbean
    - MBean attribute description: <MBeanInterfaceClass>.attribute.<AttributeName>
    - MBean operation description: <MBeanInterfaceClass>.operation.<OperationName>
    - MBean operation parameter description: <MBeanInterfaceClass>.operation.<OperationName>.<ParameterName>
    - MBean constructor description: <MBeanInterfaceClass>.constructor.<ConstructorName>
    - MBean constructor parameter description: <MBeanInterfaceClass>.constructor.<ConstructorName>.<ParameterName>

    We also purposely named our resource bundle class MBeanDescriptions and included it as part of the same package as our MBean.
    We already followed the above conventions when creating our resource bundle in Part II, and our default resource bundle class with English descriptions looks like:

        package blog.wls.jmx.appmbean;

        import java.util.ListResourceBundle;

        public class MBeanDescriptions extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {
                    {"PropertyConfigMXBean.mbean",
                     "MBean used to manage persistent application properties"},
                    {"PropertyConfigMXBean.attribute.Properties",
                     "Properties associated with the running application"},
                    {"PropertyConfigMXBean.operation.setProperty",
                     "Create a new property, or change the value of an existing property"},
                    {"PropertyConfigMXBean.operation.setProperty.key",
                     "Name that identify the property to set."},
                    {"PropertyConfigMXBean.operation.setProperty.value",
                     "Value for the property being set"},
                    {"PropertyConfigMXBean.operation.getProperty",
                     "Get the value for an existing property"},
                    {"PropertyConfigMXBean.operation.getProperty.key",
                     "Name that identify the property to be retrieved"}
                };
            }
        }

    We have now also added a resource bundle with French localized descriptions:

        package blog.wls.jmx.appmbean;

        import java.util.ListResourceBundle;

        public class MBeanDescriptions_fr extends ListResourceBundle {
            protected Object[][] getContents() {
                return new Object[][] {
                    {"PropertyConfigMXBean.mbean",
                     "Manage proprietes sauvegarde dans un fichier disque."},
                    {"PropertyConfigMXBean.attribute.Properties",
                     "Proprietes associee avec l'application en cour d'execution"},
                    {"PropertyConfigMXBean.operation.setProperty",
                     "Construit une nouvelle proprietee, ou change la valeur d'une proprietee existante."},
                    {"PropertyConfigMXBean.operation.setProperty.key",
                     "Nom de la propriete dont la valeur est change."},
                    {"PropertyConfigMXBean.operation.setProperty.value",
                     "Nouvelle valeur"},
                    {"PropertyConfigMXBean.operation.getProperty",
                     "Retourne la valeur d'une propriete existante."},
                    {"PropertyConfigMXBean.operation.getProperty.key",
                     "Nom de la propriete a retrouver."}
                };
            }
        }

    So now we can simply remove the many getDescription overrides from our MBean code, and end up with a much cleaner:

        package blog.wls.jmx.appmbean;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.File;
        import java.net.URL;
        import java.util.Map;
        import java.util.HashMap;
        import java.util.Properties;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.management.MBeanRegistration;
        import javax.management.StandardMBean;
        import javax.management.MBeanOperationInfo;
        import javax.management.MBeanParameterInfo;

        public class PropertyConfig extends StandardMBean
                implements PropertyConfigMXBean, MBeanRegistration {

            private String relativePath_ = null;
            private Properties props_ = null;
            private File resource_ = null;
            private static Map<String, String[]> operationsParamNames_ = null;

            static {
                operationsParamNames_ = new HashMap<String, String[]>();
                operationsParamNames_.put("setProperty", new String[] {"key", "value"});
                operationsParamNames_.put("getProperty", new String[] {"key"});
            }

            public PropertyConfig(String relativePath) throws Exception {
                super(PropertyConfigMXBean.class, true);
                props_ = new Properties();
                relativePath_ = relativePath;
            }

            public String setProperty(String key, String value) throws IOException {
                String oldValue = null;
                if (value == null) {
                    oldValue = String.class.cast(props_.remove(key));
                } else {
                    oldValue = String.class.cast(props_.setProperty(key, value));
                }
                save();
                return oldValue;
            }

            public String getProperty(String key) {
                return props_.getProperty(key);
            }

            public Map getProperties() {
                return (Map) props_;
            }

            private void load() throws IOException {
                InputStream is = new FileInputStream(resource_);
                try {
                    props_.load(is);
                } finally {
                    is.close();
                }
            }

            private void save() throws IOException {
                OutputStream os = new FileOutputStream(resource_);
                try {
                    props_.store(os, null);
                } finally {
                    os.close();
                }
            }

            public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception {
                // MBean must be registered from an application thread
                // to have access to the application ClassLoader
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                URL resourceUrl = cl.getResource(relativePath_);
                resource_ = new File(resourceUrl.toURI());
                load();
                return name;
            }

            public void postRegister(Boolean registrationDone) { }
            public void preDeregister() throws Exception { }
            public void postDeregister() { }

            protected String getParameterName(MBeanOperationInfo op,
                                              MBeanParameterInfo param, int sequence) {
                return operationsParamNames_.get(op.getName())[sequence];
            }
        }

    The only reason we are still extending the StandardMBean class is to override the default values for our operation parameter names. If this isn't a concern, one could just write the following:

        package blog.wls.jmx.appmbean;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import java.io.File;
        import java.net.URL;
        import java.util.Map;
        import java.util.Properties;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.management.MBeanRegistration;

        public class PropertyConfig implements PropertyConfigMXBean, MBeanRegistration {

            private String relativePath_ = null;
            private Properties props_ = null;
            private File resource_ = null;

            public PropertyConfig(String relativePath) throws Exception {
                props_ = new Properties();
                relativePath_ = relativePath;
            }

            public String setProperty(String key, String value) throws IOException {
                String oldValue = null;
                if (value == null) {
                    oldValue = String.class.cast(props_.remove(key));
                } else {
                    oldValue = String.class.cast(props_.setProperty(key, value));
                }
                save();
                return oldValue;
            }

            public String getProperty(String key) {
                return props_.getProperty(key);
            }

            public Map getProperties() {
                return (Map) props_;
            }

            private void load() throws IOException {
                InputStream is = new FileInputStream(resource_);
                try {
                    props_.load(is);
                } finally {
                    is.close();
                }
            }

            private void save() throws IOException {
                OutputStream os = new FileOutputStream(resource_);
                try {
                    props_.store(os, null);
                } finally {
                    os.close();
                }
            }

            public ObjectName preRegister(MBeanServer server, ObjectName name) throws Exception {
                // MBean must be registered from an application thread
                // to have access to the application ClassLoader
                ClassLoader cl = Thread.currentThread().getContextClassLoader();
                URL resourceUrl = cl.getResource(relativePath_);
                resource_ = new File(resourceUrl.toURI());
                load();
                return name;
            }

            public void postRegister(Boolean registrationDone) { }
            public void preDeregister() throws Exception { }
            public void postDeregister() { }
        }

    Note: the above would also require changing the operation parameter names in the resource bundle classes. For instance, PropertyConfigMXBean.operation.setProperty.key would become PropertyConfigMXBean.operation.setProperty.p0.
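    As a quick reminder of how such an MBean reaches the server (Part I covers the actual packaging and registration; the listener class, object name, and relative path below are illustrative assumptions, not the exact code from Part I):

        package blog.wls.jmx.appmbean;

        import java.lang.management.ManagementFactory;
        import javax.management.MBeanServer;
        import javax.management.ObjectName;
        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;

        // Hypothetical startup listener: registers the MBean from an application thread,
        // as PropertyConfig.preRegister requires to see the application ClassLoader.
        public class PropertyConfigRegistrar implements ServletContextListener {

            private ObjectName name_;

            public void contextInitialized(ServletContextEvent sce) {
                try {
                    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
                    name_ = new ObjectName(
                        "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig");
                    server.registerMBean(
                        new PropertyConfig("myAppProperties.properties"), name_);
                } catch (Exception e) {
                    throw new RuntimeException("MBean registration failed", e);
                }
            }

            public void contextDestroyed(ServletContextEvent sce) {
                try {
                    ManagementFactory.getPlatformMBeanServer().unregisterMBean(name_);
                } catch (Exception e) {
                    // best effort on shutdown
                }
            }
        }

    Whether you register against the platform MBeanServer, as sketched here, or against WebLogic's runtime MBeanServer is a deployment choice; Part I's approach is the authoritative one for this sample.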
    Client based localization

    When accessing our MBean using JConsole started with the following command line:

        jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
                 -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote -debug

    we see that our MBean descriptions are localized according to the WebLogic server's Locale, English in this case. (Consult Part I for information on how to use JConsole to browse and edit our MBean.)

    Now if we specify the client's Locale as part of the JConsole command line, as follows:

        jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$WL_HOME/server/lib/wljmxclient.jar \
                 -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote \
                 -J-Dweblogic.management.remote.locale=fr-FR -debug

    we see that our MBean descriptions are now localized according to the specified client's Locale, French in this case.

    We use the weblogic.management.remote.locale system property to specify the Locale that should be associated with the client's JMX connections. The value is composed of the client's language code and its country code separated by the "-" character. The country code is not required and can be omitted, for instance:

        -Dweblogic.management.remote.locale=fr

    We can also specify the client's Locale using a programmatic client, as demonstrated below:

        package blog.wls.jmx.appmbean.client;

        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;
        import javax.management.MBeanInfo;
        import javax.management.remote.JMXConnector;
        import javax.management.remote.JMXServiceURL;
        import javax.management.remote.JMXConnectorFactory;
        import java.util.Hashtable;
        import java.util.Set;
        import java.util.Locale;

        public class JMXClient {

            public static void main(String[] args) throws Exception {
                JMXConnector jmxCon = null;
                try {
                    JMXServiceURL serviceUrl = new JMXServiceURL(
                        "service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime");
                    System.out.println("Connecting to: " + serviceUrl);

                    // properties associated with the connection
                    Hashtable env = new Hashtable();
                    env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
                    String[] credentials = new String[2];
                    credentials[0] = "weblogic";
                    credentials[1] = "weblogic";
                    env.put(JMXConnector.CREDENTIALS, credentials);

                    // specifies the client's Locale
                    env.put("weblogic.management.remote.locale", Locale.FRENCH);

                    jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
                    jmxCon.connect();
                    MBeanServerConnection con = jmxCon.getMBeanServerConnection();

                    Set<ObjectName> mbeans = con.queryNames(
                        new ObjectName(
                            "blog.wls.jmx.appmbean:name=myAppProperties,type=PropertyConfig,*"),
                        null);
                    for (ObjectName mbeanName : mbeans) {
                        System.out.println("\n\nMBEAN: " + mbeanName);
                        MBeanInfo minfo = con.getMBeanInfo(mbeanName);
                        System.out.println("MBean Description: " + minfo.getDescription());
                        System.out.println("\n");
                    }
                } finally {
                    // release the connection
                    if (jmxCon != null) jmxCon.close();
                }
            }
        }

    The above client code is part of the zip file associated with this blog, and can be run using the provided client.sh script.
    The resulting output is shown below:

        $ ./client.sh
        Connecting to: service:jmx:iiop://127.0.0.1:7001/jndi/weblogic.management.mbeanservers.runtime

        MBEAN: blog.wls.jmx.appmbean:type=PropertyConfig,name=myAppProperties
        MBean Description: Manage proprietes sauvegarde dans un fichier disque.
        $

    Miscellaneous

    Using the Description annotation to specify MBean descriptions

    Earlier we saw how to name our MBean description resource keys so that WebLogic 10.3.3.0 automatically uses them to localize our MBean. In some cases we might want to explicitly specify the resource key and resource bundle, for instance when operations are overloaded and the operation name alone is no longer sufficient to uniquely identify a single operation. In this case we can use the Description annotation provided by WebLogic, as follows:

        import weblogic.management.utils.Description;

        @Description(resourceKey="myapp.resources.TestMXBean.description",
                     resourceBundleBaseName="myapp.resources.MBeanResources")
        public interface TestMXBean {

            @Description(resourceKey="myapp.resources.TestMXBean.threshold.description",
                         resourceBundleBaseName="myapp.resources.MBeanResources")
            public int getthreshold();

            @Description(resourceKey="myapp.resources.TestMXBean.reset.description",
                         resourceBundleBaseName="myapp.resources.MBeanResources")
            public int reset(
                @Description(resourceKey="myapp.resources.TestMXBean.reset.id.description",
                             resourceBundleBaseName="myapp.resources.MBeanResources",
                             displayNameKey="myapp.resources.TestMXBean.reset.id.displayName.description")
                int id);
        }

    The Description annotation should be applied to the MBean interface. It can be used to specify MBean, MBean attribute, MBean operation, and MBean operation parameter descriptions, as demonstrated above.

    Retrieving the Locale associated with a JMX operation from the MBean code

    There are several cases where it is necessary to retrieve the Locale associated with a JMX call from the MBean implementation, for instance when localizing exception messages. This can be done as follows:

        import weblogic.management.mbeanservers.JMXContextUtil;
        ......

        // some MBean method implementation
        public String setProperty(String key, String value) throws IOException {
            Locale callersLocale = JMXContextUtil.getLocale();
            // use callersLocale to localize Exception messages or
            // potentially some return values such as a Date
            ....
        }
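    Building on the snippet above, here is a minimal sketch of localizing an exception message with the caller's Locale; the ErrorMessages bundle and its key are illustrative assumptions, not part of the sample's zip file:

        package blog.wls.jmx.appmbean;

        import java.util.Locale;
        import java.util.ResourceBundle;
        import weblogic.management.mbeanservers.JMXContextUtil;

        public class LocalizedErrors {

            // Hypothetical guard for PropertyConfig.setProperty: rejects an empty key
            // with a message localized for the calling JMX client, not for the server.
            static void checkKey(String key) {
                if (key == null || key.length() == 0) {
                    Locale callersLocale = JMXContextUtil.getLocale();
                    // Bundle and key names are assumed for this sketch.
                    ResourceBundle bundle = ResourceBundle.getBundle(
                        "blog.wls.jmx.appmbean.ErrorMessages", callersLocale);
                    throw new IllegalArgumentException(
                        bundle.getString("property.key.required"));
                }
            }
        }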
    Conclusion

    With this last part we conclude our three-part series on how to write MBeans to manage J2EE applications. We are far from having exhausted the topic, but we have come a long way and can now take advantage of the latest functionality provided by the WebLogic application server to write user-friendly MBeans.

    Read the article

  • Google image Swirl - interactive information visualization

    - by skyde
    I have seen this image-swirl effect on Visual Thesaurus. Is there any open source code for this, or a research paper explaining how they made it? I don't care about the algorithm for matching similar objects; I am only wondering about the effect. From what I understand, these are called recursive orbital diagrams. Screenshots: Google Wonder Wheel, Google Image Swirl.

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >