Search Results

Search found 18329 results on 734 pages for 'interpret order'.


  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:

        Pete:    welder, carpenter
        Melissa: carpenter

    Assume they both work 40 hours/week and have not yet been assigned any work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways.

    Method A: when someone has multiple competencies, count their hours in full for each one:

        Welder:    40 hours
        Carpenter: 80 hours

    Method B: when someone has multiple competencies, count an equal division of their hours for each:

        Welder:    20 hours
        Carpenter: 60 hours

    Method A has our preference:

    - A good planner will know to plan the least available competency first. If 30 hours of welding is planned, we are left with 10 hours of welder and 50 hours of carpenter.
    - Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required.

    However, if we report this way we would like to give an estimate of the reliability of the numbers for each competency, i.e. by how much are they over-reported? In example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? And how would I calculate this for large numbers of competencies? (A rough sketch of both methods follows below.) I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning? Additionally:

    - Would there be an even better method than A or B?
    - Optionally, we also have a preference order of competencies (as in: use him/her in this order), so Pete could be 1. welder, 2. carpenter. Does this introduce new options?
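
    A rough sketch of the two counting methods, with hypothetical data structures, just to make the arithmetic concrete:

        # Method A: count each employee's full hours under every competency they hold.
        # Method B: split each employee's hours evenly across their competencies.
        employees = {
            "Pete":    {"competencies": ["welder", "carpenter"], "hours": 40},
            "Melissa": {"competencies": ["carpenter"], "hours": 40},
        }

        def method_a(staff):
            totals = {}
            for person in staff.values():
                for comp in person["competencies"]:
                    totals[comp] = totals.get(comp, 0) + person["hours"]
            return totals

        def method_b(staff):
            totals = {}
            for person in staff.values():
                share = person["hours"] / len(person["competencies"])
                for comp in person["competencies"]:
                    totals[comp] = totals.get(comp, 0) + share
            return totals

        print(method_a(employees))  # {'welder': 40, 'carpenter': 80}
        print(method_b(employees))  # {'welder': 20.0, 'carpenter': 60.0}

        # One possible reliability indicator: Method A claims 120 competency-hours
        # against 80 real person-hours, i.e. 50% over-reported overall.
        over = sum(method_a(employees).values()) / \
               sum(p["hours"] for p in employees.values()) - 1
        print("%.0f%%" % (over * 100))  # 50%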

    Read the article

  • allowing index access only with .htaccess

    - by YsoL8
    Hello. I have this in my .htaccess file, in the site root:

        Options -Indexes

        <directory ../.*>
            Deny from all
        </directory>

        <Files .htaccess>
            Order allow,deny
            Deny from all
        </Files>

        <Files index.php>
            Order allow,deny
            Allow from all
        </Files>

    What I'm trying to achieve is to block folder and file access to anything that isn't called index.php, regardless of which directory is accessed. I have the folder part working perfectly, and the deny-from-all rule is working as well, but my attempt to allow access to index.php is failing. Could someone tell me how to get it working?
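
    For reference, a minimal sketch of one way to get that effect (an assumption about the failure: <Directory> sections are not allowed inside .htaccess at all, so that block is better dropped; Apache 2.2 Order/Allow syntax assumed):

        Options -Indexes

        # deny everything in this directory tree by default...
        Order allow,deny
        Deny from all

        # ...then re-open index.php only; <Files> merges after the lines above
        <Files "index.php">
            Order allow,deny
            Allow from all
        </Files>

    (.htaccess files themselves are usually already protected by the stock <FilesMatch "^\.ht"> block in httpd.conf, so the extra <Files .htaccess> section is typically redundant.)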

    Read the article

  • Given a database table where multiple rows have the same values and only the most recent record is to be returned

    - by Jim Lahman
    I have a table where there are multiple records with the same value but varying creation dates. A sample of the data is shown here:

        select lot_num, to_char(creation_dts, 'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        order by lot_num;

        LOT_NUM        CREATION_DATE
        -------------  --------------------
        1435718.002    24-NOV-2010 11:45:54
        1440026.002    17-NOV-2010 06:50:16
        1440026.002    08-NOV-2010 23:28:24
        1526564.002    01-DEC-2010 13:14:04
        1526564.002    08-NOV-2010 22:39:01
        1526564.002    01-NOV-2010 17:04:30
        1605920.003    29-DEC-2010 10:01:24
        1945352.003    14-DEC-2010 01:50:37
        1945352.003    09-DEC-2010 04:44:22
        1952718.002    25-OCT-2010 09:33:19
        1953866.002    20-OCT-2010 18:38:31
        1953866.002    18-OCT-2010 16:15:25

    Notice that there are multiple instances of the same lot number (1440026.002, 1526564.002, 1945352.003 and 1953866.002 each appear more than once). To return only the most recent instance of each, group on lot_num and take the latest creation date:

        select lot_num,
               to_char(max(creation_dts), 'DD-MON-YYYY HH24:MI:SS') as creation_date
        from coil_setup
        group by lot_num
        order by lot_num;

    Sample output:

        LOT_NUM        CREATION_DATE
        -------------  --------------------
        2019416.002    01-JUL-2010 00:01:24
        2022336.003    06-OCT-2010 15:25:01
        2067230.002    01-JUL-2010 00:36:48
        2093114.003    02-JUL-2010 20:10:51
        2093982.002    02-JUL-2010 14:46:11
        2093984.002    02-JUL-2010 14:43:18
        2094466.003    02-JUL-2010 20:04:48
        2101074.003    11-JUL-2010 09:02:16
        2103746.002    02-JUL-2010 15:07:48
        2103758.003    11-JUL-2010 09:02:13
        2104636.002    02-JUL-2010 15:11:25
        2106688.003    02-JUL-2010 13:55:27
        2106882.003    02-JUL-2010 13:48:47
        2107258.002    02-JUL-2010 12:59:48
        2109372.003    02-JUL-2010 20:49:12
        2110182.003    02-JUL-2010 19:59:19
        2110184.003    02-JUL-2010 20:01:03
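
    A hedged alternative (assumes Oracle's analytic functions, available from 9i onwards): if you need the whole most-recent row per lot_num rather than just the date, ROW_NUMBER() avoids a self-join:

        select lot_num,
               to_char(creation_dts, 'DD-MON-YYYY HH24:MI:SS') as creation_date
        from (
          -- number each lot's rows from newest to oldest, then keep row 1
          select cs.*,
                 row_number() over (partition by lot_num
                                    order by creation_dts desc) as rn
          from coil_setup cs
        )
        where rn = 1
        order by lot_num;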

    Read the article

  • Magento 1.6.2 Catalog Price Rule Problem

    - by robgt
    My Magento system seems to have a slight issue with Catalog Price Rule application. As far as the customer is concerned, all is working perfectly. The problem is that some orders are not displayed properly in the admin system when I look at the details: the catalog price rule appears not to have been applied, so when we reconcile our card processor details with those in our back-end Sage system, the numbers do not tally. Magento and our Sage system say the customer paid X, but the card issuer has taken payment of Y. The amount actually taken is the correct one, because it reflects the catalog price rule; the customer is always charged the right amount. Because of some issue with Magento, I think the order data is being stored incorrectly (stored without the catalog price rule discount applied). This means that when I look at an order in the admin system, the line item prices that should be affected by the catalog price rule are not, and the prices in our back-end Sage system are incorrect too. We use another piece of software to bring the data from Magento into Sage, and since that software reads the order information out of Magento, the data must be stored incorrectly somewhere in Magento's database. Does anyone have any idea what is wrong here, and how it might be fixed? Cheers!

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer: this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion, or of using interface-based designs [1]. Few of these developers think about the runtime performance impact of this decision. A simple test shows that the impact can be serious. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type:

        Direct Call:   1.5uS
        Virtual Call: 13.0uS

    The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance, this ratio drops quite quickly. If the method does just 1mS of work, the differential accounts for only about a 1% decrease in performance, and the method must be called thousands of times to produce a measurable impact at the application level. Yet consider a situation such as per-pixel processing in a graphics application. Here we may have a method that is called millions of times, and even the slightest increase in overhead can have significant ramifications. In such a case, using either explicit virtuals or interface-based constructs is likely to be a mistake. A sketch of this kind of timing loop appears below. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.

    [1] When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked virtual.
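
    For readers who want to reproduce the experiment, here is a minimal sketch of that kind of timing loop (illustrative only, not the post's original harness; real measurements need Release builds, warm-up runs, and care around JIT inlining, which is itself a large part of why the direct call is cheaper):

        using System;
        using System.Diagnostics;

        class Direct { public void Work() { } }
        class Base { public virtual void Work() { } }
        class Derived : Base { public override void Work() { } }

        static class Benchmark
        {
            const long N = 1000000000; // 1 billion calls, as in the post

            static void Main()
            {
                var direct = new Direct();
                Base virt = new Derived(); // calls through the base reference are virtual

                var sw = Stopwatch.StartNew();
                for (long i = 0; i < N; i++) direct.Work(); // candidate for inlining
                Console.WriteLine("direct:  " + sw.Elapsed);

                sw.Restart();
                for (long i = 0; i < N; i++) virt.Work(); // dispatched via the vtable
                Console.WriteLine("virtual: " + sw.Elapsed);
            }
        }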

    Read the article

  • Too much I/O in the morning ?

    - by steveh99999
    Here is an interesting little improvement on a SQL 2005 system I encountered recently. Some background: this system had a fairly 'traditional OLTP' workload, i.e. heavily used during the day until around 9pm, then a batch window for several hours, then not much activity in the early hours, until the normal workload resumed the following morning. Using perfmon, I noticed that every morning we would see a big spike in SQL Server I/O when the application started to be used. As it was 2005, I decided to look at which tables were in cache before and after the overnight batch processing ran, using the DMV equivalent of dbcc memusage that I posted earlier (a rough sketch of that kind of query is shown below). Before the batch run, the contents of the data cache were split fairly evenly between my 'important/heavily used' tables. After this, some application batch processing, backups, DBCC checks and reindexes were run - a fairly standard batch, I'd suggest. Afterwards, most of the cache was being used by a table I'd describe as 'unimportant'. Why? Well, that table was the last to be reindexed - purely by luck, as the reindexing stored procedure looped through all application tables in alphabetical order. When the application starts to be used again, all this 'unimportant' data has to be replaced in cache by the data that is heavily used. So we changed the overnight reindex scripts: the most heavily accessed tables are now the last to be reindexed. Obvious really, but we did see a significant reduction in early-morning I/O after changing the order of our reindexing.
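
    For reference, a rough sketch of that kind of buffer-cache query (an approximation built on sys.dm_os_buffer_descriptors; the container_id join is simplified, and it counts 8KB pages rather than MB):

        -- per-table page counts currently in the buffer pool, current database only
        SELECT o.name AS table_name,
               COUNT(*) AS cached_pages
        FROM sys.dm_os_buffer_descriptors bd
        JOIN sys.allocation_units au ON bd.allocation_unit_id = au.allocation_unit_id
        JOIN sys.partitions p ON au.container_id = p.hobt_id
        JOIN sys.objects o ON p.object_id = o.object_id
        WHERE bd.database_id = DB_ID()
        GROUP BY o.name
        ORDER BY cached_pages DESC;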

    Read the article

  • Bring OS X Error Message window to the front

    - by Debilski
    In OS X, when an application crashes, a window with an error report appears. By default that window is neither reachable via Command+Tab nor shown in the Dock. Of course, if by accident or on purpose one clicks another window, the error report goes to the background and hides behind the other windows. This is really annoying, because in order to see it I have to use Exposé and scan through 20+ windows to find it. (Not to mention that I don't like Exposé anymore since Snow Leopard made the window sizes all confusingly equal.) Any ideas on how to make the error reports Command+Tabbable?

    Read the article

  • App Pool doesn't respect memory limits

    - by lucuma
    I am dealing with a legacy .NET app that has a memory leak. In order to try to mitigate a runaway memory situation, I've set the app pool memory limits to values anywhere between 500KB and 500000KB (500MB), but the app pool doesn't seem to respect the settings: I can log in and view its physical memory at 5GB and above no matter what values I use. This app is killing the server and I can't seem to work out how to adjust the app pool. What settings do you recommend to ensure this app pool doesn't exceed around 500MB of memory? As an example, the app pool is using 3.5GB of
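
    In case it helps, one way to enforce a hard cap is a private-memory recycling threshold; a hedged sketch, assuming IIS 7+ and a hypothetical pool name (the value is in KB, and recycling only masks the leak rather than fixing it):

        %windir%\system32\inetsrv\appcmd.exe set apppool "LegacyAppPool" /recycling.periodicRestart.privateMemory:512000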

    Read the article

  • Apache AliasMatch and DirectoryMatch not working?

    - by Alex
    I have the following config. Please notice the commented-out Alias and Directory equivalents: uncommented, they work as expected, but the dynamic/regex-based versions don't. Any ideas?

        <VirtualHost *:80>
            ServerName temp.dev.local
            ServerAlias temp.dev.local
            DocumentRoot "C:\wamp\www\temp\public"

            <Directory "C:\wamp\www\temp\public">
                AllowOverride all
                Order Allow,Deny
                Allow from all
            </Directory>

            # Alias /private/application/core/page/assets/images/ "C:/wamp/www/temp/private/application/core/page/assets/images/"
            # <Directory "C:/wamp/www/temp/private/application/core/page/assets/images/">

            AliasMatch ^/private/application/(.*)/(.*)/assets/images/ /private/application/$1/$2/assets/images/

            <DirectoryMatch "^/private/application/(.*)/(.*)/assets/images/">
                Options Indexes FollowSymlinks MultiViews Includes
                AllowOverride None
                Order allow,deny
                Allow from all
            </DirectoryMatch>
        </VirtualHost>
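
    For what it's worth, a hedged sketch of the likely fix (an assumption about intent: unlike Alias, the AliasMatch target must be a full filesystem path with the captures substituted back in, and DirectoryMatch also matches filesystem paths, not URLs):

        AliasMatch ^/private/application/([^/]+)/([^/]+)/assets/images/(.*)$ "C:/wamp/www/temp/private/application/$1/$2/assets/images/$3"

        <DirectoryMatch "^C:/wamp/www/temp/private/application/[^/]+/[^/]+/assets/images/">
            Options Indexes FollowSymlinks MultiViews Includes
            AllowOverride None
            Order allow,deny
            Allow from all
        </DirectoryMatch>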

    Read the article

  • Oracle Supply Chain at Pella Showcase, April 24-25, 2012

    - by Stephen Slade
    Nothing promotes a product like a great customer testimony! For nearly a decade, Pella has been holding these 'open houses', or Showcases as they are called, to illustrate the use of Oracle products in their operations. Building custom windows and doors is not an easy task. With about a trillion combinations of unique sizes, colors and features available, getting a complex multi-unit custom order wrong is easy to do. I've been to a few of these Showcases and each time I am impressed by the precision, best practices and lean disciplines enacted at Pella. Operations representatives and users at Pella demonstrate the way in which they use Oracle Supply Chain products to deliver fulfillment excellence. Orders are all custom made and delivered in about a week. Factory tours are conducted, and visitors have a chance to see Oracle in operation on the shop floor, driving informational flow and order accuracy in the 99+% range. It's a must-see for anyone considering expansion of their supply chain footprint. The event is April 24-25 in Pella, Iowa, outside Des Moines. This year there is a separate track for CIOs and executives. Register at 1.800.820.5592 - ask for event 10281.

    Read the article

  • Sorting a Grid of Data in ASP.NET MVC

    Last week's article, Displaying a Grid of Data in ASP.NET MVC, showed, step by step, how to display a grid of data in an ASP.NET MVC application. It started with creating a new ASP.NET MVC application in Visual Studio, then added the Northwind database to the project and showed how to use Microsoft's Linq-to-SQL tool to access data from the database. The article then looked at creating a Controller and View for displaying a list of product information (the Model). This article builds on the demo application created in Displaying a Grid of Data in ASP.NET MVC, enhancing the grid to include bi-directional sorting. If you come from an ASP.NET WebForms background, you know that the GridView control makes implementing sorting as easy as ticking a checkbox. Implementing sorting in ASP.NET MVC involves a bit more work than that, but the quantity of work isn't significantly greater, and with ASP.NET MVC we have more control over the grid and the sorting interface's layout and markup, as well as the mechanism through which sorting is implemented (a sketch of a querystring-driven sort action appears below). With the GridView control, sorting is handled through form postbacks, with the sorting parameters - what column to sort by and whether to sort in ascending or descending order - submitted as hidden form fields. In this article we'll use querystring parameters to indicate the sorting parameters, which means a particular sort order can be indexed by search engines, bookmarked, emailed to a colleague, and so on - things that are not possible with the GridView's built-in sorting capabilities. Like its predecessor, this article offers step-by-step instructions and includes a complete, working demo available for download at the end of the article. Read on to learn more!
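
    As a taste of what the article builds, here is a hypothetical sketch of such a querystring-driven sort action (the NorthwindDataContext and Product names follow the demo's Linq-to-SQL model; the parameter names are illustrative):

        using System.Linq;
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            // e.g. /Products?sortBy=UnitPrice&ascending=false
            public ActionResult Index(string sortBy = "ProductName", bool ascending = true)
            {
                var dataContext = new NorthwindDataContext();
                IQueryable<Product> products = dataContext.Products;

                // Whitelist the sortable columns rather than trusting raw input.
                switch (sortBy)
                {
                    case "UnitPrice":
                        products = ascending ? products.OrderBy(p => p.UnitPrice)
                                             : products.OrderByDescending(p => p.UnitPrice);
                        break;
                    default:
                        products = ascending ? products.OrderBy(p => p.ProductName)
                                             : products.OrderByDescending(p => p.ProductName);
                        break;
                }

                return View(products.ToList());
            }
        }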

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
    If you are interested in learning about our exciting new 12c SQL pattern matching feature, then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone), Supriya Ananth, one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows has been widely desired, but not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL, which lets you:

    - Logically partition and order the data using the PARTITION BY and ORDER BY clauses.
    - Define patterns of rows to seek using the PATTERN clause. These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
    - Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    - Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    A sketch of the clause's shape is shown below. For more information and to register for this exciting webcast, please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note: if the above link does not work, go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the Logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
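
    To give a flavour of the syntax ahead of the webcast, here is a sketch along the lines of the classic "find V-shapes in a price series" example from the Oracle documentation (it assumes a ticker table with symbol, tstamp and price columns):

        SELECT *
        FROM ticker MATCH_RECOGNIZE (
          PARTITION BY symbol          -- one search per stock symbol
          ORDER BY tstamp              -- the pattern runs over time order
          MEASURES strt.tstamp        AS start_tstamp,
                   LAST(down.tstamp)  AS bottom_tstamp,
                   LAST(up.tstamp)    AS end_tstamp
          ONE ROW PER MATCH
          AFTER MATCH SKIP TO LAST up
          PATTERN (strt down+ up+)     -- a fall followed by a rise
          DEFINE
            down AS down.price < PREV(down.price),
            up   AS up.price   > PREV(up.price)
        ) mr
        ORDER BY mr.symbol, mr.start_tstamp;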

    Read the article

  • ERP/CRM Systems. Desktop Based ? Web based?

    - by Parhs
    Hello guys. I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser? My first experience was with a web-based ERP, when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine it had run on, which worked until today and still has no problems, was from 1993. The user interface, of course, was text-based, and the speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the order-entry page of one web-based ERP I looked at (via its 'sales orders' screen) seems terribly slow for adding a product: no keyboard shortcut works to save what you added, and generally I believe you need four times as long to place an order as with the text interface. Having to use both mouse and keyboard for this task is bad and sadistic. So how the heck can these people ever use a system like that? In the long run, a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults a browser uses isn't cross-compatible, and that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • Large concurrent user performance issues for Apache + mod_jk + GlassFish v3.1 clusters

    - by user10035
    I am running a Java EE 6 EAR application on GlassFish v3.1 (2 clusters with 2 instances each), load balanced by Apache v2.2 with mod_jk, all on the same server (Windows Server 2003 R2, Intel Xeon CPU X5670 @ 2.93GHz, 6GB RAM, 2 CPUs). The web application is accessed by around ~100 users. When they all try to access it at the same time every morning, around 8am, the response is very slow on the main JSF home page. Apart from that, I have frequently seen the CPU usage of the httpd process spike up to 99% during the day, and I start seeing errors in the mod_jk.log file:

        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or client network problems
        [Wed Jun 08 08:25:43 2011] [9380:8216] [info] ajp_service::jk_ajp_common.c (2543): (myAppLocalInstance4) sending request to tomcat failed (unrecoverable), because of client write error (attempt=1)

    Any suggestions on how I can go about improving this? The Apache configuration is mostly the default, as shown below:

        ServerRoot "C:/Program Files/Apache Software Foundation/Apache2.2"
        Listen 80

        LoadModule actions_module modules/mod_actions.so
        LoadModule alias_module modules/mod_alias.so
        LoadModule asis_module modules/mod_asis.so
        LoadModule auth_basic_module modules/mod_auth_basic.so
        LoadModule authn_default_module modules/mod_authn_default.so
        LoadModule authn_file_module modules/mod_authn_file.so
        LoadModule authz_default_module modules/mod_authz_default.so
        LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
        LoadModule authz_host_module modules/mod_authz_host.so
        LoadModule authz_user_module modules/mod_authz_user.so
        LoadModule autoindex_module modules/mod_autoindex.so
        LoadModule cgi_module modules/mod_cgi.so
        LoadModule dir_module modules/mod_dir.so
        LoadModule env_module modules/mod_env.so
        LoadModule include_module modules/mod_include.so
        LoadModule isapi_module modules/mod_isapi.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module modules/mod_mime.so
        LoadModule negotiation_module modules/mod_negotiation.so
        LoadModule setenvif_module modules/mod_setenvif.so

        <IfModule !mpm_netware_module>
            <IfModule !mpm_winnt_module>
                User daemon
                Group daemon
            </IfModule>
        </IfModule>

        DocumentRoot "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs"

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html
        </IfModule>

        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

        ErrorLog "logs/error.log"
        LogLevel warn

        <IfModule log_config_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            <IfModule logio_module>
                LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
            </IfModule>
            CustomLog "logs/access.log" common
        </IfModule>

        <IfModule alias_module>
            ScriptAlias /cgi-bin/ "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin/"
        </IfModule>

        <Directory "C:/Program Files/Apache Software Foundation/Apache2.2/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

        DefaultType text/plain

        <IfModule mime_module>
            TypesConfig conf/mime.types
            AddType application/x-compress .Z
            AddType application/x-gzip .gz .tgz
        </IfModule>

        Include conf/extra/httpd-mpm.conf

        <IfModule ssl_module>
            SSLRandomSeed startup builtin
            SSLRandomSeed connect builtin
        </IfModule>

        LoadModule jk_module modules/mod_jk.so
        JkWorkersFile conf/workers.properties
        JkLogFile logs/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        JkMount /myApp/* loadbalancerLocal
        JkMount /myAppRemote/* loadbalancerRemote
        JkMount /myApp loadbalancerLocal
        JkMount /myAppRemote loadbalancerRemote

    The workers.properties config file is:

        worker.list=loadbalancerLocal,loadbalancerRemote

        worker.myAppLocalInstance1.type=ajp13
        worker.myAppLocalInstance1.host=localhost
        worker.myAppLocalInstance1.port=8109
        worker.myAppLocalInstance1.lbfactor=1
        worker.myAppLocalInstance1.socket_keepalive=1
        worker.myAppLocalInstance1.socket_timeout=1000

        worker.myAppLocalInstance2.type=ajp13
        worker.myAppLocalInstance2.host=localhost
        worker.myAppLocalInstance2.port=8209
        worker.myAppLocalInstance2.lbfactor=1
        worker.myAppLocalInstance2.socket_keepalive=1
        worker.myAppLocalInstance2.socket_timeout=1000

        worker.myAppLocalInstance3.type=ajp13
        worker.myAppLocalInstance3.host=localhost
        worker.myAppLocalInstance3.port=8309
        worker.myAppLocalInstance3.lbfactor=1
        worker.myAppLocalInstance3.socket_keepalive=1
        worker.myAppLocalInstance3.socket_timeout=1000

        worker.myAppLocalInstance4.type=ajp13
        worker.myAppLocalInstance4.host=localhost
        worker.myAppLocalInstance4.port=8409
        worker.myAppLocalInstance4.lbfactor=1
        worker.myAppLocalInstance4.socket_keepalive=1
        worker.myAppLocalInstance4.socket_timeout=1000

        worker.myAppRemoteInstance1.type=ajp13
        worker.myAppRemoteInstance1.host=localhost
        worker.myAppRemoteInstance1.port=8509
        worker.myAppRemoteInstance1.lbfactor=1
        worker.myAppRemoteInstance1.socket_keepalive=1
        worker.myAppRemoteInstance1.socket_timeout=1000

        worker.myAppRemoteInstance2.type=ajp13
        worker.myAppRemoteInstance2.host=localhost
        worker.myAppRemoteInstance2.port=8609
        worker.myAppRemoteInstance2.lbfactor=1
        worker.myAppRemoteInstance2.socket_keepalive=1
        worker.myAppRemoteInstance2.socket_timeout=1000

        worker.myAppRemoteInstance3.type=ajp13
        worker.myAppRemoteInstance3.host=localhost
        worker.myAppRemoteInstance3.port=8709
        worker.myAppRemoteInstance3.lbfactor=1
        worker.myAppRemoteInstance3.socket_keepalive=1
        worker.myAppRemoteInstance3.socket_timeout=1000

        worker.myAppRemoteInstance4.type=ajp13
        worker.myAppRemoteInstance4.host=localhost
        worker.myAppRemoteInstance4.port=8809
        worker.myAppRemoteInstance4.lbfactor=1
        worker.myAppRemoteInstance4.socket_keepalive=1
        worker.myAppRemoteInstance4.socket_timeout=1000

        worker.loadbalancerLocal.type=lb
        worker.loadbalancerLocal.sticky_session=True
        worker.loadbalancerLocal.balance_workers=myAppLocalInstance1,myAppLocalInstance2,myAppLocalInstance3,myAppLocalInstance4

        worker.loadbalancerRemote.type=lb
        worker.loadbalancerRemote.sticky_session=True
        worker.loadbalancerRemote.balance_workers=myAppRemoteInstance1,myAppRemoteInstance2,myAppRemoteInstance3,myAppRemoteInstance4
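
    As an aside, one knob worth reviewing in a setup like this (a hedged sketch; these are standard mod_jk 1.2.x worker properties, but the values are only illustrative): socket_timeout is measured in seconds, so 1000 means dead backends are noticed very slowly, and explicit connect/reply probing is usually preferred, e.g.

        # probe the backend instead of relying on a 1000-second socket timeout
        worker.myAppLocalInstance1.connect_timeout=10000        # ms, AJP cping/cpong on connect
        worker.myAppLocalInstance1.prepost_timeout=10000        # ms, cping/cpong before each request
        worker.myAppLocalInstance1.connection_pool_timeout=600  # s, close idle pool connections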

    Read the article

  • Put a Windows computer to sleep remotely (from a Linux box)

    - by snark
    I'd like to have my Linux box (a QNAP TS-210 NAS) send the order to go to sleep (or hibernate) to my main Windows 7 computer. As the NAS is running Linux, I can't use psshutdown from SysInternals' PsTools. Is there any Linux equivalent, or some "magic packet" that can order the Win7 computer to sleep? I know I could install an SSH daemon on the Windows machine and trigger a shutdown command from the Linux box over ssh, but ideally I don't want to install anything on the Win7 computer. I can install Linux software on the NAS, no problem; PHP, Python and Perl are also available on it.
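
    Should the SSH route end up being acceptable after all, the Windows-side command is the easy part; a hedged sketch (the hostname is hypothetical, and note that this rundll32 call hibernates instead of sleeping when hibernation is enabled, a long-standing quirk):

        ssh user@win7-desktop 'rundll32.exe powrprof.dll,SetSuspendState 0,1,0'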

    Read the article

  • Automatic Excel Script

    - by Thomas
    I am a 6th-year medical student and I'm working on my thesis. I have no experience with programming whatsoever; a friend recommended that I post my question here. I am struggling with the following problem: I have data on 400 patients, stored in 400 different Excel files. Each file contains 34 columns in a specific order, say A to AH, and the order is the same in each of the 400 files. Now I need to make a new Excel document that contains the first column of each patient, i.e. all the first columns of my 400 Excel files, lined up next to each other in a new document, preferably via an automatic script. After that I want to do the exact same thing for the second column, then the third, and so on. This is probably a problem that has already been solved; otherwise, could someone help me out? You have my thanks!
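
    A minimal sketch of one way to do this in Python with the third-party openpyxl package (assumptions: all 400 workbooks are .xlsx files in one folder, and the data sits on each file's first sheet; the folder and file names here are hypothetical):

        from pathlib import Path
        import openpyxl  # pip install openpyxl

        SOURCE_DIR = Path("patients")   # folder holding the 400 workbooks
        COLUMN = 1                      # 1 = column A; rerun with 2 ... 34

        out = openpyxl.Workbook()
        out_ws = out.active

        # one output column per patient file, in sorted filename order
        for col_idx, path in enumerate(sorted(SOURCE_DIR.glob("*.xlsx")), start=1):
            ws = openpyxl.load_workbook(path, read_only=True).active
            for row_idx, (value,) in enumerate(
                    ws.iter_rows(min_col=COLUMN, max_col=COLUMN, values_only=True),
                    start=1):
                out_ws.cell(row=row_idx, column=col_idx, value=value)

        out.save("column_%d_combined.xlsx" % COLUMN)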

    Read the article

  • OOW content for Pattern Matching....

    - by KLaker
    If you missed my sessions at OpenWorld then don't worry - all the content we used for pattern matching (presentation and hands-on lab) is now available for download. My presentation "SQL: The Best Development Language for Big Data?" is available for download from the OOW Content Catalog, see here: https://oracleus.activeevents.com/2013/connect/sessionDetail.ww?SESSION_ID=9101. For the hands-on lab ("Pattern Matching at the Speed of Thought with Oracle Database 12c") we used the Oracle-By-Example content. The OOW hands-on lab uses Oracle Database 12c Release 1 (12.1) and uses the MATCH_RECOGNIZE clause to perform some basic pattern matching examples in SQL. The lab is broken down into four main steps:

    - Logically partition and order the data that is used in the MATCH_RECOGNIZE clause with its PARTITION BY and ORDER BY clauses.
    - Define patterns of rows to seek using the PATTERN clause of the MATCH_RECOGNIZE clause. These patterns use regular expression syntax, a powerful and expressive feature, applied to the pattern variables you define.
    - Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause.
    - Define measures, which are expressions usable in the MEASURES clause of the SQL query.

    You can download the setup files to build the ticker schema and the student notes from the Oracle Learning Library. The direct link to the example on using pattern matching is here: http://apex.oracle.com/pls/apex/f?p=44785:24:0::NO:24:P24_CONTENT_ID,P24_PREV_PAGE:6781,2.

    Read the article

  • Strip X Window System from OpenRD (Fedora core 8)

    - by disown
    I just got myself an OpenRD Client embedded box with an old Fedora 8 on it. It runs like a charm; the only problem is that the default install has X on it, which I would like to remove in order to free up some space. I found a similar post here on the same topic, but yum groupremove doesn't work in my case: yum grouplist returns nothing, so I presume that no groups are defined. I don't know if this is because Fedora 8 didn't have any groups, or because the OpenRD build is special in some way. In any case, I would like some tips on which package(s) to remove in order to strip out as big a chunk of X as possible. Thanks in advance.
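
    A hedged starting point (package names differ between builds, so list what is actually installed before removing anything):

        # see which X packages this particular image actually carries...
        rpm -qa | grep -i -e xorg -e x11
        # ...then let yum take them (and whatever depends on them) out
        yum remove 'xorg-x11*'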

    Read the article

  • Can I save an Apache environment variable value with SetEnv?

    - by Nicholas Tolley Cottrell
    I am running Apache 2.2 with Tomcat 6 and have several layers of URL rewriting going on, both in Apache with RewriteRule and in Tomcat. I want to pass through the original REQUEST_URI that Apache sees, so that I can log it properly for "page not found" errors etc. In httpd.conf I have the line:

        SetEnv ORIG_URL %{REQUEST_URI}

    and in mod_jk.conf I have:

        JkEnvVar ORIG_URL

    which I thought should make the value available via request.getAttribute("ORIG_URL") in servlets. However, all that I see is the literal string "%{REQUEST_URI}", so I assume that SetEnv doesn't interpret the %{...} syntax. What is the right way to get the URL the user requested in Tomcat?
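
    Correct - SetEnv assigns literal text only. A hedged sketch of the usual workaround with mod_rewrite, which can capture the live request URI into an environment variable that JkEnvVar then forwards (it assumes mod_rewrite is loaded, and the rule should run before the other rewrites touch the URL):

        RewriteEngine On
        # stash the incoming URI without rewriting anything
        RewriteRule .* - [E=ORIG_URL:%{REQUEST_URI}]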

    Read the article

  • StreamInsight Now Available Through Microsoft Update

    - by Roman Schindlauer
    We are pleased to announce that StreamInsight v1.1 is now available for automatic download and install via Microsoft Update globally. In order to enable agile deployment of StreamInsight solutions, you have asked us for a steady cadence of releases with incremental but highly impactful features and product improvements. Following our StreamInsight 1.0 launch in Spring 2010, we offered StreamInsight 1.1 in Fall 2010 with implicit compatibility and an upgraded setup to support side-by-side installs. With this setup, your applications will automatically point to the latest runtime, but you still have the choice to point your application back to a 1.0 runtime if you choose to do so. As the next step, in order to enable timely delivery of our releases to you, we are pleased to announce support for automatic download and install of the StreamInsight 1.1 release via Microsoft Update starting this week. If you have a computer that:

    - is subscribed to Microsoft Update (different from Windows Update),
    - has StreamInsight 1.0 installed, and
    - does not yet have StreamInsight 1.1 installed,

    Microsoft Update will automatically download and install the corresponding StreamInsight 1.1 update side by side with your existing StreamInsight 1.0 installation - across all supported 32-bit and 64-bit Windows operating systems, across 11 supported languages, and across StreamInsight client and server SKUs. This is also supported in WSUS environments if all your updates are managed from a corporate server (please talk to the WSUS administrator in your enterprise). As an example, if you have SI Client 1.0 DEU and SI Server 1.0 ENU installed on the same computer, Microsoft Update will selectively download and side-by-side install just the SI Client 1.1 DEU and SI Server 1.1 ENU releases. Going forward, Microsoft Update will be our preferred mode of delivery, in addition to support for our download sites and media-based distribution where appropriate. Regards, The StreamInsight Team

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, the computer or operating system will normally represent it in terms of powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first, and in places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". Using the binary definition of "gigabyte" artificially inflates the byte count a device needs, making drive makers either pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack 1 terabyte in and have it show up as "931 GB" (most of them do the latter; the arithmetic is sketched below). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
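
    For concreteness, the "931 GB" figure falls straight out of the two definitions:

        # a drive sold as 1 TB = 10**12 bytes, expressed in binary gigabytes
        print(10**12 / 2**30)   # ~931.32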

    Read the article

  • Open Windows Server 2012/Windows 8 Start Menu

    - by bmccleary
    I realize that Windows Server 2012 (and Windows 8) removed the Start menu button and replaced it with moving your mouse to the corner of the screen. This works fine when the desktop is full screen. However, I access all my servers through windowed RDP connections (or through the Hyper-V console window), and in this case the desktop is not full screen. Therefore, in order to open the new Start menu, I have to slowly and carefully move my mouse to within just a few pixels of the corner of the window. Also, because the session is windowed, the default hot keys (Windows + D, etc.) won't work. There has got to be an easier way. Has anyone else experienced this frustration?

    Read the article
