Search Results

Search found 11565 results on 463 pages for 'variable expansion'.


  • .htaccess to pass FULL URL into other PHP link as $_GET

    - by 4lvin
    From an .htaccess file, how can I pass/redirect the FULL URL to another URL as a GET variable? For example, http://www.test.com/foo/bar.asp should be redirected to http://www.newsite.com/?url=http://www.test.com/foo/bar.asp (the full URL, including the domain). I tried: RewriteEngine on RedirectMatch 301 ^/(.*)\.asp http://www.newsite.com/?url=%{REQUEST_URI} But it comes out like: http://www.newsite.com/?url=?%{REQUEST_URI}
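
    Whatever rewrite rule ends up being used, the embedded URL should normally be percent-encoded when passed as a query-string value. A minimal Python sketch of the target URL being aimed for (illustrative only, not part of the original question):

        # Hypothetical illustration: building the redirect target with the
        # original URL percent-encoded as the "url" query parameter.
        from urllib.parse import urlencode

        original = "http://www.test.com/foo/bar.asp"
        target = "http://www.newsite.com/?" + urlencode({"url": original})
        print(target)  # http://www.newsite.com/?url=http%3A%2F%2Fwww.test.com%2Ffoo%2Fbar.asp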

    Read the article

  • How to get the Date in a batch file in a predictable format?

    - by AngryHacker
    In a batch file I need to extract the month, day, and year from the date command. So I used the following, which essentially slices the %Date% variable into substrings: set Day=%Date:~3,2% set Mth=%Date:~0,2% set Yr=%Date:~6,4% This works fine, but if I deploy the batch file to a machine with different regional/country settings, it fails because the month, day, and year are in different positions. How can I extract the month, day, and year regardless of the date format?
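
    One way to sidestep the locale dependence is to obtain the date as numeric fields rather than slicing a locale-formatted string. A hedged sketch of that idea outside batch (Python used purely to show the concept, not the asker's environment):

        # Sketch: read year/month/day as numbers instead of parsing %Date%,
        # so the result does not depend on regional settings.
        from datetime import date

        today = date.today()
        yr, mth, day = today.year, today.month, today.day
        print("%04d-%02d-%02d" % (yr, mth, day))  # always YYYY-MM-DD, regardless of locale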

    Read the article

  • Availability of big files on multiple servers

    - by Imises
    I have to handle many (1'000 - 30'000) big files ranging from 200MB up to 2GB. The demand for these files is variable (0 - 300 downloads per file), which is why each file must be saved on 2 or more servers. My servers are located in different datacenters (France), with HDDs of different sizes (750GB to 4TB). Currently I distribute the files using PHP and ncftpget / ncftpput, but it's very slow. I need a solution for balancing these files across 7+ servers.
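
    One common way to decide which servers hold which file is a deterministic hash-based assignment, so every file maps to a fixed set of replicas without a central lookup. A minimal sketch of that idea (server names are hypothetical, not from the question):

        # Sketch: map each file name to `copies` servers via a hash of the name.
        import hashlib

        SERVERS = ["srv-paris-1", "srv-paris-2", "srv-lyon-1", "srv-lyon-2",
                   "srv-lille-1", "srv-lille-2", "srv-nice-1"]

        def replica_servers(filename, copies=2):
            digest = int(hashlib.md5(filename.encode("utf-8")).hexdigest(), 16)
            start = digest % len(SERVERS)
            return [SERVERS[(start + i) % len(SERVERS)] for i in range(copies)]

        print(replica_servers("big-file-0001.bin"))  # e.g. ['srv-lyon-1', 'srv-lyon-2']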

    Read the article

  • cygwin, PATH problem?

    - by jayjaypg22
    I run a .ksh script containing an awk call. awk.exe and its shortcut awk are in /bin, and /bin is in the PATH environment variable. But when I try to launch awk, I get this error message: bash: /usr/bin/awk: no such file or directory Why doesn't bash look for it in the /bin folder too? Edit: tar has the same permissions, tar.exe is in /bin and can be listed in /usr/bin/, exactly the same way as awk. tar works fine, whereas awk does not.

    Read the article

  • Google chrome proxy authentication dialogue timeout

    - by Nihar Sarangi
    I am on a network that uses an LDAP proxy for authentication with a username and password. Whenever I start Google Chrome, it pops up a proxy authentication dialogue, but the dialogue disappears automatically after a variable amount of time (sometimes it stays for 5 seconds, sometimes less than 1 second). I have found the same issue with Chromium as well. Is there any configuration I can set to control this timeout, or, say, to auto-authenticate with my credentials from the shell or DE (GNOME 3 on Arch)?

    Read the article

  • Creating a menu using xslt for Umbraco

    - by rob_g
    I've created a menu in umbraco using XSLT. The menu is using the usual ul and li elements and I'm displaying only the first level of the menu. The aim is to create a menu that expands to show the sub menu when I click a parent node (in the top level). I am after the xslt I would need to expose the sub menu when clicked. I think I would need to make use of ancestor-or-self to detect the current menu and parent menu and display them and also the $currentPage variable. I have the following xslt: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#x00A0;"> ]> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxml="urn:schemas-microsoft-com:xslt" xmlns:umbraco.library="urn:umbraco.library" xmlns:Exslt.ExsltCommon="urn:Exslt.ExsltCommon" xmlns:Exslt.ExsltDatesAndTimes="urn:Exslt.ExsltDatesAndTimes" xmlns:Exslt.ExsltMath="urn:Exslt.ExsltMath" xmlns:Exslt.ExsltRegularExpressions="urn:Exslt.ExsltRegularExpressions" xmlns:Exslt.ExsltStrings="urn:Exslt.ExsltStrings" xmlns:Exslt.ExsltSets="urn:Exslt.ExsltSets" xmlns:tagsLib="urn:tagsLib" xmlns:urlLib="urn:urlLib" exclude-result-prefixes="msxml umbraco.library Exslt.ExsltCommon Exslt.ExsltDatesAndTimes Exslt.ExsltMath Exslt.ExsltRegularExpressions Exslt.ExsltStrings Exslt.ExsltSets tagsLib urlLib "> <xsl:output method="xml" omit-xml-declaration="yes"/> <xsl:param name="currentPage"/> <xsl:template match="/"> <div id="kb-categories"> <h3>Categories</h3> <xsl:call-template name="drawNodes"> <xsl:with-param name="parent" select="$currentPage/ancestor-or-self::node [@level=1]"/> </xsl:call-template> </div> </xsl:template> <xsl:template name="drawNodes"> <xsl:param name="parent"/> <xsl:if test="(umbraco.library:IsProtected($parent/@id, $parent/@path) = 0 or (umbraco.library:IsProtected($parent/@id, $parent/@path) = 1)) and $parent/@level = 1"> <ul class="kb-menuLevel1" > <xsl:for-each select="$parent/node [string(./data [@alias='showInMenu']) = 1]"> <li> <a href="/kb{umbraco.library:NiceUrl(@id)}"> <xsl:value-of select="@nodeName"/> </a> <xsl:variable name="level" select="@level" /> <xsl:if test="(count(./node [string(./data [@alias='showInMenu']) = '1']) &gt; 0)"> <xsl:call-template name="drawNodes"> <xsl:with-param name="parent" select="."/> </xsl:call-template> </xsl:if> </li> </xsl:for-each> </ul> </xsl:if> <xsl:if test="(umbraco.library:IsProtected($parent/@id, $parent/@path) = 0 or (umbraco.library:IsProtected($parent/@id, $parent/@path) = 1)) and $parent/@level &gt; 1"> <ul class="kb-menuLevel{@level}" style="display: none;"> <xsl:for-each select="$parent/node [string(./data [@alias='showInMenu']) = 1]"> <li> <a href="/kb{umbraco.library:NiceUrl(@id)}"> <xsl:value-of select="@nodeName"/> </a> <xsl:variable name="level" select="@level" /> <xsl:if test="(count(./node [string(./data [@alias='showInMenu']) = '1']) &gt; 0)"> <xsl:call-template name="drawNodes"> <xsl:with-param name="parent" select="."/> </xsl:call-template> </xsl:if> </li> </xsl:for-each> </ul> </xsl:if> </xsl:template> </xsl:stylesheet> I suspect this could be improved using apply-templates, but I'm not yet up to speed with that (this being only the second day of my learning xslt). My menu: Item 1 Item 2 Item 3 Item 4 when I click on Item 2 I want to see it's child menu too: Item 1 Item 2 -- Item 2.1 -- Item 2.2 Item 3 Item 4 and so on down the nested menu.

    Read the article

  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip containing all the .pyc files like os and such, which are then imported and the program runs no problems. The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines, others seem to work with no apparent issues). What happens is, the program runs from W:, the W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work: 1) If W:\library.zip contains the only copy of the .pyc files, I get ZipImportError: can't decompress data; zlib not available (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network). 2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g W:\pylib), I get ImportError: no module named os (os is the first module imported, before sys and anything else). Come to think of it, sys.path wouldn't be available to search if os was imported before it maybe? I'll try switching the order of those imports but my question still stands: Why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run? I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance! EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there. If not, it looks in library.zip). EDIT 2: In fact, according to this, there is a difference between the sys.path in the Python interpreter and that of Py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standards for locating system libraries, so it won't work. Now for the interesting bit. The path it tries to load atexit, os, etc. from is: W:\\library.zip\<module>.<ext> Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like if this is a string literal, then since the slash isn't doubled, it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be) and so the double slash confuses the module loader. 
Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath the sys.path.. pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file. I also recently tried the following code at the start of the program: for pth in sys.path: fErr.write(pth) fErr.write(' to ') pth.replace('\\\\','\\') # Fix Windows 98 pathing issues fErr.write(pth) fErr.write('\n') But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify the sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?
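
    One detail worth noting about the loop above: Python strings are immutable, so pth.replace('\\\\','\\') computes a new string and discards it; sys.path itself never changes. A minimal sketch of the same idea using only built-in string and list operations (no os or linecache required), assuming that collapsing the doubled backslashes is indeed the right fix:

        # Sketch: rebuild sys.path with doubled backslashes collapsed,
        # using nothing beyond str/list built-ins.
        import sys

        sys.path = [p.replace("\\\\", "\\") for p in sys.path]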

    Read the article

  • Session caching problem

    - by Levani
    I have a strange problem with php sessions. I use them for authorization on my site. I store two variables - currently logged in user's id and username in session. When I log in with one username, than log out and log in again with another username the previous user's id is returned using the session variable instead of the current user. The most strange thing is that this happens only when it comes to insert some data into database. When I directly echo this variable the correct id is displayed, but when I insert new record into database this variable sends incorrect id. Here is the php code I use for sending data into database: <?php session_start(); //connect database require_once 'dbc.php'; $authorID = $_SESSION['user_id']; if ( $authorID != 0 ) { $content = htmlentities($_POST["answ_content"],ENT_COMPAT,'UTF-8'); $dro = date('Y-m-d H:i:s'); $qID = $_POST["question_ID"]; $author = 'avtori'; $sql="INSERT INTO comments (comment_ID, comment_post_ID, comment_author, comment_date, comment_content, user_id) VALUES (NULL, '$qID', '$author', '$dro', '$content', '$authorID')"; $result = mysql_query($sql); } else { echo 'error'; } ?> Can anyone please help? Here is the logout function: function logout() { global $db; session_start(); if(isset($_SESSION['user_id']) || isset($_COOKIE['user_id'])) { mysql_query("update `users` set `ckey`= '', `ctime`= '' where `id`='$_SESSION[user_id]' OR `id` = '$_COOKIE[user_id]'") or die(mysql_error()); } /************ Delete the sessions****************/ unset($_SESSION['user_id']); unset($_SESSION['user_name']); unset($_SESSION['user_level']); unset($_SESSION['HTTP_USER_AGENT']); session_unset(); session_destroy(); /* Delete the cookies*******************/ setcookie("user_id", '', time()-60*60*24*COOKIE_TIME_OUT, "/"); setcookie("user_name", '', time()-60*60*24*COOKIE_TIME_OUT, "/"); setcookie("user_key", '', time()-60*60*24*COOKIE_TIME_OUT, "/"); header("Location: index.php"); } Here is the authentication script: include 'dbc.php'; $err = array(); foreach($_GET as $key => $value) { $get[$key] = filter($value); //get variables are filtered. } if ($_POST['doLogin']=='Login') { foreach($_POST as $key => $value) { $data[$key] = filter($value); // post variables are filtered } $user_email = $data['usr_email']; $pass = $data['pwd']; if (strpos($user_email,'@') === false) { $user_cond = "user_name='$user_email'"; } else { $user_cond = "user_email='$user_email'"; } $result = mysql_query("SELECT `id`,`pwd`,`full_name`,`approved`,`user_level` FROM users WHERE $user_cond AND `banned` = '0' ") or die (mysql_error()); $num = mysql_num_rows($result); // Match row found with more than 1 results - the user is authenticated. if ( $num > 0 ) { list($id,$pwd,$full_name,$approved,$user_level) = mysql_fetch_row($result); if(!$approved) { //$msg = urlencode("Account not activated. Please check your email for activation code"); $err[] = "Account not activated. Please check your email for activation code"; //header("Location: login.php?msg=$msg"); //exit(); } //check against salt if ($pwd === PwdHash($pass,substr($pwd,0,9))) { // this sets session and logs user in session_start(); session_regenerate_id (true); //prevent against session fixation attacks. 
// this sets variables in the session $_SESSION['user_id']= $id; $_SESSION['user_name'] = $full_name; $_SESSION['user_level'] = $user_level; $_SESSION['HTTP_USER_AGENT'] = md5($_SERVER['HTTP_USER_AGENT']); //update the timestamp and key for cookie $stamp = time(); $ckey = GenKey(); mysql_query("update users set `ctime`='$stamp', `ckey` = '$ckey' where id='$id'") or die(mysql_error()); //set a cookie if(isset($_POST['remember'])){ setcookie("user_id", $_SESSION['user_id'], time()+60*60*24*COOKIE_TIME_OUT, "/"); setcookie("user_key", sha1($ckey), time()+60*60*24*COOKIE_TIME_OUT, "/"); setcookie("user_name",$_SESSION['user_name'], time()+60*60*24*COOKIE_TIME_OUT, "/"); } if(empty($err)){ header("Location: myaccount.php"); } } else { //$msg = urlencode("Invalid Login. Please try again with correct user email and password. "); $err[] = "Invalid Login. Please try again with correct user email and password."; //header("Location: login.php?msg=$msg"); } } else { $err[] = "Error - Invalid login. No such user exists"; } }

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere: (def *attr-table* ; Key: [attr-key tag-name] or [boolean-function] ; Value: [attr-key attr-value] (empty array to ignore) ; Context: Variables "tagname", "akey", "aval" '( ; translate :LINK attribute in <a> to :href [:LINK "a"] [:href aval] ; translate :LINK attribute in <img> to :src [:LINK "img"] [:src aval] ; throw exception if :LINK attribute in any other tag [:LINK] (throw (RuntimeException. (str "No match for " tagname))) ; ... more rules ; ignore string keys, used for internal bookkeeping [(string? akey)] [] )) ; ignore I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template. Here is one approach using global variables and bindings. I have included some logic for the table lookup: ;; Generic code, works with any table on the same format. (defn rule-match? [rule-val test-val] "true if a single rule matches a single argument value" (cond (not (coll? rule-val)) (= rule-val test-val) ; plain value (list? rule-val) (eval rule-val) ; function call :else false )) (defn rule-lookup [test-val rule-table] "looks up rule match for test-val. Returns result or nil." (loop [rules (partition 2 rule-table)] (when-not (empty? rules) (let [[select result] (first rules)] (if (every? #(boolean %) (map rule-match? select test-val)) (eval result) ; evaluate and return result (recur (rest rules)) ))))) ;; Code specific to *attr-table* (def tagname) ; need these globals for the binding in html-attr (def akey) (def aval) (defn html-attr [tagname h-attr] "converts to html attributes" (apply hash-map (flatten (map (fn [[k v :as kv]] (binding [tagname tagname akey k aval v] (or (rule-lookup [k tagname] *attr-table*) kv))) h-attr )))) (defn test-attr [] "test conversion" (prn "a" (html-attr "a" {:LINK "www.google.com" "internal" 42 :title "A link" })) (prn "img" (html-attr "img" {:LINK "logo.png" }))) user=> (test-attr) "a" {:href "www.google.com", :title "A link"} "img" {:src "logo.png"} This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus of course that the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work. Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals: (defn attr-table [tagname akey aval] `( [:LINK "a"] [:href ~aval] [:LINK "img"] [:src ~aval] [:LINK] (throw (RuntimeException. (str "No match for " tagname))) ; ... more rules [(string? ~akey)] [] ))) Only a couple of changes are needed to the rest of the code: In rule-match?, when syntax-quoted the function call is no longer a list: - (list? rule-val) (eval rule-val) + (seq? 
rule-val) (eval rule-val) In html-attr: - (binding [tagname tagname akey k aval v] - (or (rule-lookup [k tagname] *attr-table*) kv))) + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv))) And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives to pass along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or Javascript's function.apply(context)?
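
    For comparison with the binding-based version above, other runtimes expose a similar "dynamically scoped, no globals" facility. A small Python sketch of the same idea, purely for cross-language comparison (not a Clojure answer):

        # Sketch: context variables act like dynamically scoped bindings
        # without module-level globals.
        from contextvars import ContextVar

        tagname = ContextVar("tagname")
        aval = ContextVar("aval")

        def rule_lookup():
            # reads whatever values are currently bound in this context
            return {"href": aval.get()} if tagname.get() == "a" else {}

        def html_attr(tag, value):
            t1, t2 = tagname.set(tag), aval.set(value)
            try:
                return rule_lookup()
            finally:
                tagname.reset(t1)
                aval.reset(t2)

        print(html_attr("a", "www.google.com"))  # {'href': 'www.google.com'}
        print(html_attr("img", "logo.png"))      # {}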

    Read the article

  • Writing an optimised and efficient search engine with mySQL and ColdFusion

    - by Mel
    I have a search page with the following scenarios listed below. I was told there was a better way to do it, but not how, and that I am using too many if statements, and that it's prone to causing an error through url manipulation: Search.cfm will processes a search made from a search bar present on all pages, with one search input (titleName). If search.cfm is accessed manually (through URL not through using the simple search bar on all pages) it displays an advanced search form with three inputs (titleName, genreID, platformID) or it evaluates searchResponse variable and decides what to do. If simple search query is blank, has no results, or less than 3 characters it displays an error If advanced search query is blank, has no results, or less than 3 characters it displays an error If any successful search returns results, they come back normally. The top-of-page logic is as follows: <!---SET DEFAULT VARIABLE---> <cfparam name="variables.searchResponse" default=""> <!---CHECK TO SEE IF SIMPLE SEARCH A FORM WAS SUBMITTED AND EXECUTE SEARCH IF IT WAS---> <cfif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.simpleSearch") AND Len(Trim(Form.titleName)) GTE 3> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="simpleSearch" searchString="#Form.titleName#" returnvariable="simpleSearchResult"> <cfset variables.searchResponse = "simpleSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.simpleSearchResult") AND simpleSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> <!---CHECK IF ADVANCED SEARCH FORM WAS SUBMITTED---> <cfif IsDefined("Form.AdvancedSearch") AND Len(Trim(Form.titleName)) LTE 2> <cfset variables.searchResponse = "invalidString"> <cfelseif IsDefined("Form.advancedSearch") AND Len(Trim(Form.titleName)) GTE 2> <!---EXECUTE METHOD AND GET DATA---> <cfinvoke component="myComponent" method="advancedSearch" returnvariable="advancedSearchResult" titleName="#Form.titleName#" genreID="#Form.genreID#" platformID="#Form.platformID#"> <cfset variables.searchResponse = "advancedSearchResult"> </cfif> <!---CHECK IF ANY RECORDS WERE FOUND---> <cfif IsDefined("variables.advancedSearchResult") AND advancedSearchResult.RecordCount IS 0> <cfset variables.searchResponse = "noResult"> </cfif> I'm using the searchResponse variable to decide what the the page displays, based on the following scenarios: <!---ALWAYS DISPLAY SIMPLE SEARCH BAR AS IT'S PART OF THE HEADER---> <form name="simpleSearch" action="search.cfm" method="post"> <input type="hidden" name="simpleSearch" /> <input type="text" name="titleName" /> <input type="button" value="Search" onclick="form.submit()" /> </form> <!---IF NO SEARCH WAS SUBMITTED DISPLAY DEFAULT FORM---> <cfif searchResponse IS ""> <h1>Advanced Search</h1> <!---DISPLAY FORM---> <form name="advancedSearch" action="search.cfm" method="post"> <input type="hidden" name="advancedSearch" /> <input type="text" name="titleName" /> <input type="text" name="genreID" /> <input type="text" name="platformID" /> <input type="button" value="Search" onclick="form.submit()" /> </form> </cfif> <!---IF SEARCH IS BLANK OR LESS THAN 3 CHARACTERS DISPLAY ERROR MESSAGE---> <cfif searchResponse IS "invalidString"> <cfoutput> <h1>INVALID SEARCH</h1> </cfoutput> </cfif> <!---IF SEARCH WAS MADE BUT NO RESULTS WERE FOUND---> <cfif searchResponse IS "noResult"> <cfoutput> <h1>NO RESULT FOUND</h1> 
</cfoutput> </cfif> <!---IF SIMPLE SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "simpleSearchResult"> <cfoutput> <h1>Search Results</h1> </cfoutput> <cfoutput query="simpleSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> <!---IF ADVANCED SEARCH WAS MADE A RESULT WAS FOUND---> <cfif searchResponse IS "advancedSearchResult"> <cfoutput> <h1>Search Results</h1> <p>Your search for "#Form.titleName#" returned #advancedSearchResult.RecordCount# result(s).</p> </cfoutput> <cfoutput query="advancedSearchResult"> <!---DISPLAY QUERY DATA---> </cfoutput> </cfif> Is my logic a) not efficient because my if statements/is there a better way to do this? And b) Can you see any scenarios where my code can break? I've tested it but I have not been able to find any issues with it. And I have no way of measuring performance. Any thoughts and ideas would be greatly appreciated. Many thanks
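
    One way to read the "too many if statements" feedback is to validate once and then branch once on the search type. A rough sketch of that shape (written in Python with hypothetical stand-in functions, not ColdFusion code):

        # Sketch: a single validation path followed by one dispatch on search type.
        def simple_search(term):
            return []  # stands in for the simpleSearch CFC method

        def advanced_search(term, genre_id, platform_id):
            return []  # stands in for the advancedSearch CFC method

        def handle_search(form):
            term = (form.get("titleName") or "").strip()
            if "simpleSearch" not in form and "advancedSearch" not in form:
                return "showAdvancedForm"
            if len(term) < 3:
                return "invalidString"
            if "advancedSearch" in form:
                results = advanced_search(term, form.get("genreID"), form.get("platformID"))
            else:
                results = simple_search(term)
            return results or "noResult"

        print(handle_search({"simpleSearch": "", "titleName": "ab"}))  # invalidString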

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, support seminars and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, that come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go live Increased processing 
speeds 10-fold and with zero data loss for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of between 6 and 100 times faster for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business, can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads on to fewer servers which delivers greater power efficiency and lower operating costs that competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Silverlight Cream for March 31, 2010 -- #826

    - by Dave Campbell
    In this Issue: Andrea Boschin, Radenko Zec, Andrej Tozon, Bobby Diaz, Brad Abrams, Wolf Schmidt, Colin Eberhardt, Anand Iyer, Matthias Shapiro, Jaime Rodriguez, Bill Reiss, and Lee. Shoutouts: Cigdem has a post up about here MIX10 Interviewing experiences: MIX10 SilverlightShow Interviews Ian T. Lackey has his material up from his talk Silverlight SEO at the St. Louis .Net Users Group Not Silverlight but definitely WP7 cool, Michael Klucher reports that there are New Windows Phone Samples on Creators Club Online Tim Heuer posted a survey: What tools are the minimum to get started in Silverlight? From SilverlightCream.com: A RoleManager to apply roles declaratively to user interface Andrea Boschin also has a new post at SilverlightShow discussing the use of a RoleManager in WCF RIA Services to apply user roles to elements of the UI... good stuff, Andrea. Virtualization in Silverlight 4 RC Radenko Zec has a post out at SilverlightShow where he explains UI and Data Virtualization then gives some examples of their use in Silverlight 4RC, and some issues as well. MS Word Mail Merge with Silverlight 4 COM Automation Andrej Tozon has a post up at SilverlightShow that I missed in the rush of MIX10. He's doing MailMerge with COM automation and Silverlight 4... actually prett cool stuff and all the source! KISS and Tell - MVVM and the ViewModelLocator Bobby Diaz is blogging about a very popular subject right now: ViewModelLocator. He's not showing production code, but it's a thought... check it out. Silverlight 4 + RIA Services - Ready for Business: Validating Data I'm running behind, but Brad Abrams' next post in his series is about validating data in the business application. He also discusses setting up shared code validation. A One-stop Shopping XAML Namespace for Silverlight Client SDK Controls Wolf Schmidt at the Silverlight SDK has a post up highlighting the SL4 XAML namespace prefix. He starts with SL3 then demonstrates the feature's use in SL4. Binding a Silverlight 3 DataGrid to dynamic data via IDictionary (Updated) Colin Eberhardt has an update to his previous article of the same title. This one is a bug fix on an upgrade to SL3 and also an expansion of the previous post. Demo Apps from MIX10 on Windows Phone 7 Anand Iyer posted links to all the WP7 demos used at MIX10 and at least in the case of FourSquare, the source is on CodePlex. XAML Files for Location Visualizations in Silverlight and WPF Matthias Shapiro has graciously provided XAML for us for Silverlight and WPF for a bunch of different US maps... too cool, now we don't have to be asking 'where did you get that map?'... thanks Matthias! Theming in Windows Phone Jaime Rodriguez has a post up that deep-dives theming in general and demonstrates using it on WP7... end-user configurations and developer stuff. Space Rocks game step 7: Moving the ship It appears that in the heat of battle (blogging) I said Bill Reiss' Space Rocks game he's building is for WP7... obviously it's not, but it's a game folks... :) THis is Episode 7 and he's moving the ship now. SL4(RC) RichTextBox and Access Violation Lee has some code that looks like it should work for a RichTextBox in SL4RC, and it's throwing an error... see if you have a solution for him... or is it a bug? Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 Hospitals and 5,000 Clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions’ database from their start in 2005 to worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of benefits they’ve had as a result of those changes and reasons for their continued reliance on MySQL (from FairWarning MySQL Case Study). Scalability to Handle Terabytes of Data FairWarning's customers have a lot of data: On average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months). Low or Zero Admin = Few DBAs "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business.” - Chris Arnold, FairWarning Vice President of Product Management and Engineering. Performance Schema  As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6’ new maintenance and management features have helped FairWarning keep up. In particular, MySQL 5.6 performance schema’s low-level metrics have provided critical insight into how the system is performing and why. Support for Mutli-CPU Threads MySQL 5.6' support for multiple concurrent CPU threads, and FairWarning's custom data loader allow multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has been reduced by 500%. MySQL Enterprise Hot Backup Because hospitals and clinics never stop, FairWarning solutions can’t either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%. MySQL Enterprise Edition and Product Roadmap Provide Complete Solution "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules."  - Chris Arnold Learn More>> FairWarning MySQL Case Study Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and / or slides + Q&A transcript) MyISAM to InnoDB – Why and How on-demand webinar (same stuff) Top 10 Reasons to Use MySQL as an Embedded Database white paper [1] 2013 Best in KLAS: Software & Services report, January, 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples: cd shows all directories under my current folder: $ cd co<tab><tab> cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/ Commands like ls and less show all files and directories under my current folder: $ ls co<tab><tab> cmake/ config/ .cproject Doxyfile.in include/ programs/ README.txt src/ tests/ CMakeLists.txt COPYING.txt doc/ examples/ mainpage.dox .project sandbox/ .svn/ Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed): $ cd ~/D<tab><tab> cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/ But it seems to be working fine for commands and variables: $ if<tab><tab> if ifconfig ifdown ifnames ifquery ifup $ echo $P<tab><tab> $PATH $PIPESTATUS $PPID $PS1 $PS2 $PS4 $PWD $PYTHONPATH I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced: if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi I've even tried manually executing that file, but it doesn't fix the problem: $ . /etc/bash_completion There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now. Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc: # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # # Modified by Neil Traft #source ~/.profile # Allow globs to expand hidden files shopt -s dotglob nullglob # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines or lines starting with space in the history. # See bash(1) for more options HISTCONTROL=ignoreboth # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # If set, the pattern "**" used in a pathname expansion context will # match all files and zero or more directories and subdirectories. #shopt -s globstar # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # Color the prompt export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] " # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? 
= 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    We’ve been extremely busy here on the Oracle WebCenter team. We hope that you’ve all be keeping up with the interesting news each week. Last week was jammed full of GartnerPCC and Gartner360 buzz. If you missed any of the highlights – be sure to check out both Kellsey’s post from last week: Gartner PCC: A Shovel & Some Ah-Ha's and Christie’s overview of Loren Weinberg’s PCC presentation: "Here Today, Gone Tomorrow: Engage Your Customers or Lose Them"  . This week, we’ll be focusing on “Doing More with WebCenter” leading up to a great webcast scheduled for Thursday, March 22 (invite and registration link below). This is the 2nd in a series of 3 webcasts dedicated to expanding the understanding of the full capabilities of WebCenter. Yes – that might mean that you are not getting the full benefits of the software you already own or the expansion potential via upgrade to the full WebCenter Suite Plus. Tune in on Thursday 10 a.m. PT / 1 p.m. ET.  ++++++++++++++ Want to be a Speaker at Oracle OpenWorld 2012? Oracle Open World planning has already kicked off. We know that it is only March and next October is far in the distance. But planning has already started for Oracle OpenWorld 2012. So if you want to be a speaker and propose your own session for this year's event in San Francisco on September 30th - October 4th, starting thinking now!  The annual OpenWorld Call for Papers is now open until April 9th! All of the details to submit a paper are available here. Of course, the WebCenter team here is interested in sessions including case studies, thought-leadership, customer stories around any of the Oracle WebCenter solutions, but the Call for Papers is open to all Oracle topics. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. Sell your session, because there will be a lot of competition to be selected.  Bonus News: Speakers for selected sessions receive a complimentary full conference pass! Get your papers in and we'll see you in San Francisco! ~~~~~~~~~~~~~~~~~~~~~~ Webcast Series: Do More with Oracle WebCenter - Expand Beyond Content Management Enable Employees, Partners, and Customers to Do More with Your Content Dear [FIRSTNAME] [LASTNAME],-- Did you know that, in addition to content management, Oracle WebCenter now also includes comprehensive portal, composite application, collaboration, and Web experience management capabilities? Join us for this Webcast and learn how you can provide a new level of user engagement. Learn how Oracle WebCenter: Drives task-specific application data and content to a single screen for executing specific business processes Enables mixed internal and external environments where content can be securely shared and filtered with employees, partners, and customers, based upon role-based security Offers Web experience management, driving contextually relevant, social, and interactive online experiences across multiple channels Provides social features that enable sharing, activity feeds, collaboration, expertise location, and best-practices communities Learn how to do more with Oracle WebCenter. Register now for the Webcast. Register Now Join us for the second Webcast in the series "Do More With Oracle WebCenter". March 22, 2012 10 a.m. PT / 1 p.m. ET Presented by: Michelle Huff Senior Director, WebCenter Product Management, Oracle Greg Utecht Project Manager,IT Operations,TIES Copyright © 2012, Oracle and/or its affiliates. All rights reserved. 
Contact Us | Legal Notices | Privacy Oracle Corporation - Worldwide Headquarters, 500 Oracle Parkway, OPL - E-mail Services, Redwood Shores, CA 94065, United States

    Read the article

  • Java in Flux: Utopia or Deuteranopia?

    - by Tori Wieldt
    What a difference a year makes, indeed. Steve Harris, Senior VP, App Server Dev, Oracle and Adam Messinger, VP, Fusion Middleware Group, Oracle presented an informative keynote at the TheServerSide Java Symposium today. With a title "Java in Flux: Utopia or Deuteranopia?" you know things are going to be interesting (see Aeon Flux if you don't get the title reference).What a YearThey started with a little background, explaining that the reactions to Oracle's acquisition of Sun (and therefore Java) one year ago varied greatly, from "Freak Out!" to "Don't Panic." From the Oracle perspective, being the steward of and key contributor to Java requires a lot of sausage making.  They admitted to Oracle's fair share of Homer Simpson-esque "D'oh" moments in the past year, which was complicated by Oracle's communication style.   "Oracle has a tradition has a saying a few things and sticking by then, in contrast to Sun who was much more open," Adam explained. "We laid out the Java roadmap and are executing on it, and we hope that speaks to our commitment."Java SEAdam talked about having a long term perspective on the Java language (20+ years), letting ideas mature in more experimental languages, then bringing them into Java. Current priorities include: JVM convergence (getting the best features of JRockit into Hotspot); support of parallel/multi-core programming, and of course, all the improvements in JDK7. The JDK7 Developer Preview is underway (please download now and report bugs!). The Oracle development team is also working on Lambda and modularity (Jigsaw) for SE 8. Less certain, but also under discussion are improvements for Java SE 9. Adam is thinking of it as a "back to basics" release. He mentioned reworking JNI, improving data integration and improved device support.Java EE To provide context about Java EE, Steve said Java EE was great at getting businesses on the internet. The success of Java EE resulted in an incredible expansion of the middleware marketplace for developers and vendors.  But with success, came more. Java EE kept piling on capabilities, but that created excess baggage.  Doing simple things was no longer so simple. That's where Java community is so valuable: "When Java EE was too complex and heavyweight, many people were happy to tell us what we were doing wrong and popularize solutions," Steve explained. Because of that feedback, the Java EE teams focused on making things simple again: POJOs and annotations, and leveraging changes in Java SE.  Steve said that "innovation doesn't happen in expert groups, it happens on the ground where developers are solving problems," and platform stewards need to pay attention and take advantage of changes that are taking place.Enter the Cloud "Developers are restless, they want cloud functionality from their own IT dept" Steve explained. With the cloud, the scope of problem has expanded to include the data center itself, with multiple tenants. To move forward, existing APIs in Java EE need to be updated to be tenant-aware, service-enabled, and EE needs to support various styles of deployment. The goal is to get all that done in Java EE 8.Adam questioned Steve about timing and schedule. "Yes, the schedule is aggressive, but it'll work" Steve said. Then Adam asked about modularization. If Java SE 8 comes out at the end of 2012, when can Java EE deliver modularization? Steve suggested that key stakeholders can come with up some pre-SE 8 agreement on how to expose the metadata about modules. 
He then alluded to Mark Reinhold and John Duimovich's keynote at EclipseCON next week. Stay tuned.Evil Master PlanIn conclusion, Adam finally admitted to Oracle's Evil Master Plan: 1) Invest in and improve Java SE and EE 2) Collaborate with the community 3) Broaden the marketplace for Java development. Bwaaaaaaaaahahaha! <rubs hands together>Key LinksJDK7 Developer Preview  http://jdk7.java.net/preview/Oracle Technology Network http://www.oracle.com/technetwork/java/index.htmlTheServerSide Java Symposium  http://javasymposium.techtarget.com/"Utopia or Deuteranopia?" http://en.wikipedia.org/wiki/Aeon_Flux

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 1/ 2

    - by Sanjeevio
    Financial institutions view compliance as a regulatory burden that incurs a high initial capital outlay and recurring costs. By its very nature regulation takes a prescriptive, common-for-all, approach to managing financial and non-financial risk. Needless to say, no longer does mere compliance with regulation will lead to sustainable differentiation.  Genuine competitive advantage will stem from being able to cope with innovation demands of the present economic environment while meeting compliance goals with regulatory mandates in a faster and cost-efficient manner. Let’s first take a look at the key factors that are limiting the pursuit of the above goal. Regulatory requirements are growing, driven in-part by revisions to existing mandates in line with cross-border, pan-geographic, nature of financial value chains today and more so by frequent systemic failures that have destabilized the financial markets and the global economy over the last decade.  In addition to the increase in regulation, financial institutions are faced with pressures of regulatory overlap and regulatory conflict. Regulatory overlap arises primarily from two things: firstly, due to the blurring of boundaries between lines-of-businesses with complex organizational structures and secondly, due to varying requirements of jurisdictional directives across geographic boundaries e.g. a securities firm with operations in US and EU would be subject different requirements of “Know-Your-Customer” (KYC) as per the PATRIOT ACT in US and MiFiD in EU. Another consequence and concomitance of regulatory change is regulatory conflict, which again, arises primarily from two things: firstly, due to diametrically opposite priorities of line-of-business and secondly, due to tension that regulatory requirements create between shareholders interests of tighter due-diligence and customer concerns of privacy. For instance, Customer Due Diligence (CDD) as per KYC requires eliciting detailed information from customers to prevent illegal activities such as money-laundering, terrorist financing or identity theft. While new customers are still more likely to comply with such stringent background checks at time of account opening, existing customers baulk at such practices as a breach of trust and privacy. As mentioned earlier regulatory compliance addresses both financial and non-financial risks. Operational risk is a non-financial risk that stems from business execution and spans people, processes, systems and information. Operational risk arising from financial processes in particular transcends other sources of such risk. Let’s look at the factors underpinning the operational risk of financial processes. The rapid pace of innovation and geographic expansion of financial institutions has resulted in proliferation and ad-hoc evolution of back-office, mid-office and front-office processes. This has had two serious implications on increasing the operational risk of financial processes: ·         Inconsistency of processes across lines-of-business, customer channels and product/service offerings. This makes it harder for the risk function to enforce a standardized risk methodology and in turn breaches harder to detect. ·         The proliferation of processes coupled with increasingly frequent change-cycles has resulted in accidental breaches and increased vulnerability to regulatory inadequacies. 
In summary, regulatory growth (including overlap and conflict) coupled with process proliferation and inconsistency is driving process compliance complexity In my next post I will address the implications of this process complexity on financial institutions and outline the role of BPM in lowering specific aspects of operational risk of financial processes.

    Read the article

  • PASS: International Travels

    - by Bill Graziano
    Nihao!  One of the largest changes PASS is going through is the the expansion outside the US and Canada.  We’ve had international chapters and events in Europe since the early 2000’s.  But nothing on the scale we’re seeing now.  Since January 1st there have been 18 SQL Saturday events outside North America and 19 events in North America.  We hope to have three international SQLRally events outside the US in FY13 (budget willing).  I don’t know the exact percentage of chapters outside the US but it’s got be 50% or higher. We recently started an effort to remake the Board to better reflect the growing global face of PASS.  This involves assigning some Board seats to geographic regions.  You can ask questions about this in our feedback forum, participate in a Twitter chat or ask questions directly of Board members.  You can email me at if you’d like to ask a question directly.  We’re doing this very slowly and deliberately in hopes that a long communication cycle gives us a chance to address all the issues that our members will raise. After the Summit we passed a budget exception allocating an extra $20,000 for Board members to travel to local events.  I think it’s important for Board members to visit new areas and talk to more of our members.  I sent out an email asking where people had attended events outside their home city.  Here’s the list I got back: Albuquerque, Amsterdam, Boston, Brisbane, Chicago, Colorado Springs, Columbus, Dallas, Houston, Jacksonville, Las Vegas, London, Louisville, Minneapolis, New York City, Orange County, Orlando, Pensacola, Perth, Philadelphia, Phoenix, Redmond, Seattle, Silicon Valley, Sydney, Tampa Bay, Vancouver, Washington DC and Wellington.  (Disclaimer: Some of this travel was paid for by employers or Board members themselves.  Some of this travel may have been completed before the Summit.  That’s still one heck of a list!) The last SQL Saturday event this fiscal year is SQL Saturday Shanghai.  And that’s one I’m attending.  This is our first event in China and is being put on in cooperation with the local Microsoft office.  Hopefully this event will be the start of a growing community in China that includes chapters, SQL Saturdays and maybe a SQLRally or two in the future.  I’m excited to speak with people that are just starting down this path and watching this community grow. I encourage you to visit the PASS Global Growth site and read through the material there.  This is the biggest change we’ve made to our governance since I’ve been on the Board.  You need to understand how it affects you and how it affects the organization. And wish me luck on the 15 hour flight to Shanghai on Friday afternoon.  Rob Farley flies from Australia to the US for PASS events multiple times per year and I don’t know how he does it so often.  I think one of these is going to wipe me out.  (And Nihao (knee-how) is Chinese for Hello.)

    Read the article

  • BI&EPM in Focus Sep 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers ·       Iluka Resources Improves Business Insight into Mining Operations Through Significantly Faster, Customized Analyses ·       Waikato Regional Council Consolidates Financial Reports up to 10 Times Faster  ·       Lojas Renner Shortens Budget Consolidation from Three Days to 15 Minutes; Improves Data Quality, and Supports Aggressive Expansion Plans  ·       Link to Complete Archive ·       Profit Magazine article featuring General Dynamics: RECONnomics: Integrate. Innovate. Grow (link) ·       Video: Goodhope Asia Unifies Financial Data with Oracle Hyperion (link)   Enterprise Performance Management ·       Oracle University Training on Demand: 1.     Oracle Hyperion Financial Reporting 11.1.2 for Financial Management (link) 2.     Oracle Essbase Bootcamp: On Demand (link) 3.     Oracle Hyperion Planning 11.1.2: Create & Manage Applications On Demand (link)   Business Intelligence ·       Oracle University Training on Demand: 1.     Learn How to Create Analyses, Dashboards with OBI 11g (link) 2.     Build Repositories with Oracle Business Intelligence 11g (link) 3.     Oracle BI Publisher 11g Fundamentals (link) 4.     Oracle BI Applications Courses now available for 7.9.6  (link) 5.     Oracle BI 11g: New Features and Exalytics ·       Oracle Business Intelligence Release 11.1.1.6.2BP, updated information: 1.     Oracle BI Mobile at the Speed of Thought 2.     What's New in Oracle Business Intelligence Mobile 3.     What's New in Oracle Business Intelligence Visualizations 4.     Oracle Business Intelligence on Oracle.com 5.     Download the New Release ·       Discover How to Turn Data into Insight: Big Data Guide : Whitepaper and set of short Videos. ·       New OPN Specialisation Exam for OBI 11g Certification . ·       Lastest BIC2g and Exalytics Demonstration VMs for Partners . ·       New Version 2.3 Oracle Endeca Information Discovery and Server now available . ·       New Oracle BI Publisher 11.1.1.6 Trial Edition Now Available .

    Read the article

  • 2-column; multi-accordion pane

    - by Josh
    Alright, I'm having some issues and I believe it's a CSS one. Here is what I'm working on currently: http://www.notedls.com/demo/ Focusing on the News accordion menu. The idea here is to have a small image (50x50 with padding) and then a large headline next to it. When the user clicks the headline, it expands to show the article. If the user wants to read comments or make a comment themselves, they can then click View Comments to expand it even further. The issue I'm having (if it isn't clear) is the spacing between the image and the text. I could simply increase the height of the ui.accordion-acc or -left to make everything fit, but that doesn't solve the issue. If you notice, when you click on the first expansion of Headline 1, View Comments wraps underneath the image, which is something I don't want. I've tried separating these elements into additional divs and even floating them, but it's just not working. Essentially, I want the blank space underneath the image to continue for however much room the article plus comments end up taking.

    Read the article

  • Joining on a common table, how do you get a FULL OUTER JOIN to expand on another table?

    - by stimpy77
    I've scoured StackOverflow and Google for an answer to this problem. I'm trying to create a Microsoft SQL Server 2008 view. Not a stored procedure. Not a function. Just a query (i.e. a view). I have three tables. The first table defines a common key, let's say "CompanyID". The other two tables have a sometimes-common field, let's say "EmployeeName". I want a single table result that, when my WHERE clause says "WHERE CompanyID = 12", looks like this:

    CompanyID | TableA    | TableB
    12        | John Doe  | John Doe
    12        | Betty Sue | NULL
    12        | NULL      | Billy Bob

    I've tried a FULL OUTER JOIN that looks like this:

    SELECT Company.CompanyID, TableA.EmployeeName, TableB.EmployeeName
    FROM Company
    FULL OUTER JOIN TableA ON Company.CompanyID = TableA.CompanyID
    FULL OUTER JOIN TableB ON Company.CompanyID = TableB.CompanyID
        AND (TableA.EmployeeName IS NULL
             OR TableB.EmployeeName IS NULL
             OR TableB.EmployeeName = TableA.EmployeeName)

    I'm only getting the NULL from one matched table; I'm not getting the expansion for the other table. In the above sample, I'm basically only getting the first and third rows and not the second. Can someone help me create this query and show me how this is done correctly? BTW I already have a stored procedure that looks very clean and populates an in-memory table, but that isn't what I want. Thanks.

    Read the article

  • Function approximation with Maclaurin series

    - by marines
    I need to approximate (1-x)^0.25 to a given accuracy (e.g. 0.0001). I'm using the expansion found on Wikipedia for (1+x)^0.25. I need to stop approximating when the current term is less than the accuracy.

    #include <math.h>   /* for fabsl */

    long double s(long double x, long double d)
    {
        long double w = 1;
        long double n = 1;      // index of the nth term in the series
        long double tmp = 1;    // current term

        // sum while the last term is greater than the accuracy
        while (fabsl(tmp) >= d) {
            tmp *= (1.25 / n - 1) * (-x);   // the next term
            w += tmp;                       // is added to the approximation
            n++;
        }
        return w;
    }

    Don't mind long double n. :P This works well when I'm not checking the value of the current term but simply computing 1000 or more terms. The domain of the function is [-1; 1], and s() calculates the approximation well for x in [-1; ~0.6]. The bigger the argument, the bigger the error of the calculation; from about 0.6 onward it exceeds the accuracy. I'm not sure if the problem is clear enough, because I don't know the English math language well. The question is: what is wrong with the while condition, and why doesn't the function s() approximate correctly?
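    For reference, here is a sketch of the series the loop above accumulates, assuming the standard binomial expansion is the Wikipedia formula the poster means (this derivation is an editorial addition, not part of the original post):

    \[
    (1-x)^{1/4} = \sum_{n=0}^{\infty} \binom{1/4}{n} (-x)^n,
    \qquad
    \frac{t_n}{t_{n-1}} = \frac{1/4 - (n-1)}{n}\,(-x) = \left(\frac{1.25}{n} - 1\right)(-x),
    \]

    which is exactly the factor applied to tmp on each iteration. For large n the ratio of successive terms approaches x, so the remaining tail is roughly x/(1-x) times the last computed term. At x = 0.6 that factor is already 1.5, and it grows without bound as x approaches 1, which may be why a stopping rule that checks only a single term against d starts missing the accuracy target around 0.6.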

    Read the article

  • Graph navigation problem

    - by affan
    I have a graph of components and the relations between them. The user opens the graph and wants to navigate through it based on his choices. He starts with the root node and clicks an expand button, which reveals the new components related to the current component. The problem comes when the user decides to collapse a node. I have to choose a sub-tree to hide, and at the same time leave the graph in a consistent state, so that there is no expanded node with a missing relation to another node in the graph. In the case of a cycle/loop between components I have difficulty choosing the sub-tree. For simplicity I use the order in which the nodes were expanded, so if A expanded into B and C, collapsing A will hide the nodes and edges it created. Now consider the following scenario. A is expanded to reveal B and C. Then B is expanded to reveal D. C is expanded, which creates a link between C and the existing node D and also creates node E. Now the user decides to collapse B. Since, by order of expansion, D is a child of B, it will collapse and hide D. This leaves the graph in an inconsistent state, as C is expanded with an edge to D but D is no longer there; if I remove the CD edge it is still inconsistent. If I collapse C instead, and E in turn has a cyclic link, e.g. to B, the same problem appears.

          /-----B[-]-----\
    A[-]                  D[+]
          \-----C[-]-----/
                  \
                   E[+]

    So, any idea how I can solve this problem? The user needs to navigate through the graph and should be able to collapse nodes, but I am stuck with the problem of cyclic nodes, where collapsing any node in the loop leaves the graph in an inconsistent state.
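    One way to keep the view consistent (an editorial sketch, not from the original post; the choice of Java and the GraphView/expand/collapse names are assumptions) is to reference-count visibility: a node stays on screen while at least one expanded neighbor has revealed it, and collapsing a node only hides neighbors whose count drops to zero.

    import java.util.*;

    // Sketch: visibility by reference counting instead of by expansion order.
    class GraphView {
        private final Map<String, Set<String>> edges;                   // adjacency list of the full graph
        private final Map<String, Integer> refCount = new HashMap<>();  // expanded neighbors that revealed a node
        private final Set<String> expanded = new HashSet<>();           // nodes the user has expanded

        GraphView(Map<String, Set<String>> edges, String root) {
            this.edges = edges;
            refCount.put(root, 1);  // the root is always visible
        }

        boolean isVisible(String node) {
            return refCount.getOrDefault(node, 0) > 0;
        }

        void expand(String node) {
            if (!isVisible(node) || !expanded.add(node)) return;  // only visible, not-yet-expanded nodes
            for (String neighbor : edges.getOrDefault(node, Collections.emptySet())) {
                refCount.merge(neighbor, 1, Integer::sum);        // reveal (or re-reference) each neighbor
            }
        }

        void collapse(String node) {
            if (!expanded.remove(node)) return;
            for (String neighbor : edges.getOrDefault(node, Collections.emptySet())) {
                int remaining = refCount.merge(neighbor, -1, Integer::sum);
                if (remaining <= 0) {
                    refCount.remove(neighbor);
                    collapse(neighbor);  // a node that disappears cannot stay expanded
                }
            }
        }
    }

    In the scenario above, collapsing B only drops D's count from 2 to 1, so D and the CD edge stay visible because C still references them; D and E disappear together once C is collapsed as well.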

    Read the article

  • Java SortedMap to Scala TreeMap

    - by Dave
    I'm having trouble converting a Java SortedMap into a Scala TreeMap. The SortedMap comes from deserialization and needs to be converted into a Scala structure before being used. Some background, for the curious: the serialized structure is written through XStream, and on deserializing I register a converter that says anything assignable to SortedMap[Comparable[_],_] should be given to me. So my convert method gets called and is given an Object that I can safely cast, because I know it's of type SortedMap[Comparable[_],_]. That's where it gets interesting. Here's some sample code that might help explain it.

    // a conversion from Comparable to Ordering
    scala> implicit def comparable2ordering[A <: Comparable[A]](x: A): Ordering[A] = new Ordering[A] {
         |   def compare(x: A, y: A) = x.compareTo(y)
         | }
    comparable2ordering: [A <: java.lang.Comparable[A]](x: A)Ordering[A]

    // jm is how I see the map in the converter. Just as an Object. I know the key
    // is of type Comparable[_]
    scala> val jm : Object = new java.util.TreeMap[Comparable[_], String]()
    jm: java.lang.Object = {}

    // It's safe to cast as the converter only gets called for SortedMap[Comparable[_],_]
    scala> val b = jm.asInstanceOf[java.util.SortedMap[Comparable[_],_]]
    b: java.util.SortedMap[java.lang.Comparable[_], _] = {}

    // Now I want to convert this to a TreeMap
    scala> collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })
    <console>:15: error: diverging implicit expansion for type Ordering[A]
    starting with method Tuple9 in object Ordering
           collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })

    Read the article

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be put inside SwingUtilities.invokeAndWait or SwingUtilities.invokeLater; this way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with only about 400 entries and a nesting depth of at most 4, so it should be nothing scary, right? But it sometimes takes a whole second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what: the slow JTree updates do cause delays during input, and the JTextPane refreshes only once the tree gets updated. I am using NetBeans and know empirically that a Java app can update lots of information without freezing the rest of the UI. How can it be done?

    NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait.
    NOTE 2: When I replace invokeAndWait with invokeLater the tree doesn't get updated.
    NOTE 3: Found out that recursive tree expansion takes by far the most time.
    NOTE 4: I'm using a custom tree cell renderer; I will try without it and report back.
    NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, keyed by tree node.
    CLUE 1: Wow! Without setting the custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.
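    A minimal sketch of the flyweight renderer pattern Swing's tree painting expects (an editorial addition; the class name is made up, and it assumes the per-node JTextComponent cache in the custom renderer is the bottleneck): the renderer configures and returns a single reusable component on each call, instead of creating or looking up a component for every node.

    import java.awt.Component;
    import javax.swing.JTree;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.swing.tree.DefaultTreeCellRenderer;

    // Flyweight renderer: getTreeCellRendererComponent is called for every visible row
    // on each paint, so it must stay cheap: no component creation, no map lookups.
    class FastTreeCellRenderer extends DefaultTreeCellRenderer {
        @Override
        public Component getTreeCellRendererComponent(JTree tree, Object value,
                boolean selected, boolean expanded, boolean leaf, int row, boolean hasFocus) {
            super.getTreeCellRendererComponent(tree, value, selected, expanded, leaf, row, hasFocus);
            Object userObject = ((DefaultMutableTreeNode) value).getUserObject();
            setText(String.valueOf(userObject)); // only derive the label text here
            return this;                         // the same instance is reused for every cell
        }
    }

    Giving the tree a fixed row height (tree.setRowHeight(...) with a positive value) can also help, since it spares Swing from asking the renderer for a preferred size per node when many rows are expanded at once.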

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >