Search Results


  • Elegance, thy Name is jQuery

    - by SGWellens
So, I'm browsing through some questions over on the Stack Overflow website and I found a good jQuery question just a few minutes old. Here is a link to it. It was a tough question; I knew that by answering it, I could learn new stuff and reinforce what I already knew: Reading is good, doing is better. Maybe I could help someone in the process too. I cut and pasted the HTML from the question into my Visual Studio IDE and went back to Stack Overflow to reread the question. Dang, someone had already answered it! And it was a great answer. I never even had a chance to start analyzing the issue. Now I know what a one-legged man feels like in an ass-kicking contest. Nevertheless, since the question and answer were so interesting, I decided to dissect them and learn as much as possible. The HTML consisted of some divs separated by h3 headings. Note the elements are laid out sequentially with no programmatic grouping:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    </form></body>

The requirement was to wrap a div around each h3 heading and the subsequent divs, grouping them into sections. Why? I don't know; I suppose if you screen-scraped some HTML from another site, you might want to reformat it before displaying it on your own. Anyways… Here is the marvelously succinct posted answer:

    $('.heading').each(function(){
        $(this).nextUntil('.heading').andSelf().wrapAll('<div class="section">');
    });

I was familiar with all the parts except for nextUntil and andSelf. But I'll analyze the whole answer for completeness. I'll do this by rewriting the posted answer in a different style and adding a boat-load of comments:

    function Test() {
        // $Sections is a jQuery object and it will contain three elements
        var $Sections = $('.heading');

        // use each to iterate over each of the three elements
        $Sections.each(function () {
            // $this is a jQuery object containing the current element
            // being iterated
            var $this = $(this);

            // nextUntil gets the following sibling elements until it reaches
            // an element with the CSS class 'heading';
            // andSelf adds the source element (this) back into the collection
            $this = $this.nextUntil('.heading').andSelf();

            // wrap the elements with a div
            $this.wrapAll('<div class="section">');
        });
    }

The code here doesn't look nearly as concise and elegant as the original answer. However, unless you and your staff are jQuery masters, during development it really helps to work through algorithms step by step. You can step through this code in the debugger and examine the jQuery objects to make sure one step is working before proceeding to the next. It's much easier to debug and troubleshoot when each logical coding step is a separate line. Note: You may think the original code runs much faster than this version. However, the time difference is trivial: not enough to worry about: less than 1 millisecond (tested in IE and FF). Note: You may want to jam everything into one line because it results in less traffic being sent to the client. That is true. However, most Internet servers now compress HTML and JavaScript by stripping out comments and white space (go to Bing or Google and view the source). This feature should be enabled on your server: let the server compress your code, you don't need to do it. Free Career Advice: Creating maintainable code is Job One—Maximum Priority—The Prime Directive. 
If you find yourself suddenly transferred to customer support, it may be that the code you are writing is not as readable as it could be and not as readable as it should be. Moving on… I created a CSS class to see the results:

    .section {
        background-color: yellow;
        border: 2px solid black;
        margin: 5px;
    }

Here is the rendered output before… and after the jQuery code runs. Pretty cool! But, while playing with this code, the logic of nextUntil began to bother me: What happens in the last section? What stops elements from being collected, since there are no more elements with the .heading class? The answer is nothing. In this case it stopped because it was at the end of the page. But what if there were additional HTML elements? I added an anchor tag and another div to the HTML:

    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    <a>this is a link</a>
    <div>unrelated div</div>
    </form></body>

The code as-is will include both the anchor and the unrelated div. This isn't what we want. My first attempt to correct this used the filter parameter of the nextUntil function:

    nextUntil('.heading', 'div')

This will only collect div elements. But it merely skipped the anchor tag, and it still collected the unrelated div. The problem is we need a way to tell the nextUntil function when to stop. CSS selectors to the rescue:

    nextUntil('.heading, a')

This tells nextUntil to stop collecting sibling elements when it gets to an element with the .heading class OR when it gets to an anchor tag. In this case it solved the problem. FYI: The comma in a CSS selector allows multiple criteria. Bingo! One final note: we could have broken the code down even more. We could have replaced the andSelf function here:

    $this = $this.nextUntil('.heading, a').andSelf();

With this:

    // get all the following siblings and then add the current item
    // (add returns a new jQuery object, so reassign the result)
    $this = $this.nextUntil('.heading, a');
    $this = $this.add(this);

But in this case, the andSelf function reads real nice. In my opinion. Here's a link to a jsFiddle if you want to play with it. I hope someone finds this useful. Steve Wellens
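P.S. For anyone who would rather run it locally than on jsFiddle, here is a minimal self-contained page pulling the pieces together. One caveat for newer jQuery: andSelf was deprecated in jQuery 1.8 and removed in 3.0; its replacement, addBack, behaves identically here, so that is what this sketch uses (the CDN URL is just one common choice, not the one from the original question):

    <!DOCTYPE html>
    <html>
    <head>
    <style>
        .section { background-color: yellow; border: 2px solid black; margin: 5px; }
    </style>
    <script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>
    </head>
    <body>
    <h3 class="heading">Heading 1</h3>
    <div>Content</div>
    <div>More content</div>
    <div>Even more content</div>
    <h3 class="heading">Heading 2</h3>
    <div>some content</div>
    <div>some more content</div>
    <h3 class="heading">Heading 3</h3>
    <div>other content</div>
    <a>this is a link</a>
    <div>unrelated div</div>
    <script>
        $(function () {
            $('.heading').each(function () {
                // addBack() replaces the deprecated andSelf()
                $(this).nextUntil('.heading, a').addBack().wrapAll('<div class="section">');
            });
        });
    </script>
    </body>
    </html>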


  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
· Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
· Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
· Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
· Crash Safe Slaves and Binlog
· Optimized Row Based Replication
· Replication Event Checksums
· Time Delayed Replication
These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark; details are as follows.

Environment
The test environment consisted of two Linux servers:
· one running the replication master
· one running the replication slave
Only the slave was involved in the actual measurements, and was based on the following configuration:
- Hardware: Oracle Sun Fire X4170 M2 Server
- CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
- OS: 64-bit Oracle Enterprise Linux 6.1
- Memory: 48 GB

Test Procedure
Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

    CREATE TABLE `sbtest` (
       `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
       `k` int(10) unsigned NOT NULL DEFAULT '0',
       `c` char(120) NOT NULL DEFAULT '',
       `pad` char(60) NOT NULL DEFAULT '',
       PRIMARY KEY (`id`),
       KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. 
Each sysbench client executed 10,000 "update key" statements:

    UPDATE sbtest SET k=k+1 WHERE id = <random row>

In total, this generated 100,000 update statements to later replicate during the test itself.

Test Methodology: The number of slave workers to test with was configured using:

    SET GLOBAL slave_parallel_workers=<workers>

Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

Test Reset: The test-reset cycle was implemented as follows:
· the slave was stopped
· the slave data directory was replaced with the previous backup
· the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric calculated and averaged for each worker count.

MySQL Configuration
The relevant configuration settings used for MySQL are as follows:

    binlog-format=STATEMENT
    relay-log-info-repository=TABLE
    master-info-repository=TABLE

As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
0 worker threads:
   - current (i.e. single-threaded) sequential mode
   - 1 x IO thread and 1 x SQL thread
   - SQL thread both reads and executes the events
1 worker thread:
   - sequential mode
   - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
   - coordinator reads the event and hands it to the worker, who executes it
2+ worker threads:
   - parallel execution
   - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
   - coordinator reads events and hands them to the workers, who execute them

Results
Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).
Figure 1: 5x Higher Performance with Multi-Threaded Slaves
The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.
Figure 2: Detailed Results
As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
· Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
· As expected, having more workers than schemas adds no visible benefit. 
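To make the methodology above concrete, one measurement cycle can be sketched in SQL roughly as follows. This is not the actual test harness (which is not published with this post); the worker count and the binlog coordinates are placeholders you would take from SHOW MASTER STATUS on your own master:

    -- run on the slave (MySQL 5.6); the slave must be stopped
    -- before changing the worker count
    STOP SLAVE;
    SET GLOBAL slave_parallel_workers = 10;

    -- pull all pending events into the relay log first
    START SLAVE IO_THREAD;

    -- ...wait until the relay log holds all 100k updates,
    -- then start the benchmark clock and apply them:
    START SLAVE SQL_THREAD;

    -- blocks until the SQL thread reaches the master's end coordinates
    SELECT MASTER_POS_WAIT('mysql-bin.000003', 107);

    -- stop the clock here; QPS = 100000 / elapsed seconds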
Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

    relay-log-info-repository=TABLE
    master-info-repository=TABLE

For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

Conclusion
As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions.

Appendix – Detailed Results


  • How to resolve "HTTP/1.1 403 Forbidden" errors from iCal/CalDAV server after upgrade to Snow Leopard Server?

    - by morgant
We recently upgraded our Open Directory Master & Replica to Mac OS X 10.6.4 Snow Leopard Server. We had a mismatched server FQDN & LDAP Search Base/Kerberos Realm, so we exported all users & groups, created the new Open Directory Master w/matching FQDN & Search Base/Realm, reimported users & groups, and re-bound all servers & workstations to the new OD Master. At the same time as all of this, we upgraded our iCal/CalDAV server to Mac OS X 10.6.4 Snow Leopard Server. Ever since doing so, we've seen the following issues with our iCal/CalDAV server and iCal clients on both Mac OS X 10.5 Leopard & Mac OS X 10.6: If a user attempts to move or delete an event (single or repeating) that was created prior to the upgrade to 10.6 Server, they get the following error: Access to "blah" in "blah" in account "blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. New users added to the directory get the following error when attempting to add their account in iCal's preferences: The user "blah" has no configured principals. Confirm with your network administrator that your account has at least one CalDAV principal configured. Interestingly, we've since discovered that users seem to be able to delete individual events from an old repeating event without error, but that's a massive amount of work to get rid of a repeating event. I will note that we have not yet added an SRV record in DNS as instructed on page 19 of iCal_Server_Admin_v10.6.pdf (a sketch of such a record is at the end of this question). Further Investigation: In this particular case, a user is attempting to decline repeating events created prior to the upgrade to Snow Leopard Server. Granting the user full write access with sudo calendarserver_manage_principals --add-write-proxy users:user1 users:user2 (which did work) doesn't allow deletion of the events. We still get the usual error: Access to "blah blah" in "blah blah" in account "blah blah" is not permitted. The server responded: "HTTP/1.1 403 Forbidden" to operation CalDAVWriteEntityQueueableOperation. 
The error that shows up in /var/log/caldavd/error.log on the iCal Server when attempting to delete one of the events is:

    2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.extensions#info] PUT /calendars/__uids__/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/calendar/XXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.ics HTTP/1.1
    2011-03-17 15:14:30-0400 [-] [caldav-8009] [PooledMemCacheProtocol,client] [twistedcaldav.scheduling.implicit#error] Cannot change ORGANIZER: UID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

And the error in /var/log/system.log on the client is:

    Mar 17 15:14:30 192-168-21-169-dhcp iCal[33509]: CalDAV CalDAVWriteEntityQueueableOperation failed: status 'HTTP/1.1 403 Forbidden' request:\n\nBEGIN:VCALENDAR^M\nVERSION:2.0^M\nPRODID:-//Apple Inc.//iCal 3.0//EN^M\nCALSCALE:GREGORIAN^M\nBEGIN:VTIMEZONE^M\nTZID:US/Eastern^M\nBEGIN:DAYLIGHT^M\nTZOFFSETFROM:-0500^M\nTZOFFSETTO:-0400^M\nDTSTART:20070311T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU^M\nTZNAME:EDT^M\nEND:DAYLIGHT^M\nBEGIN:STANDARD^M\nTZOFFSETFROM:-0400^M\nTZOFFSETTO:-0500^M\nDTSTART:20071104T020000^M\nRRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU^M\nTZNAME:EST^M\nEND:STANDARD^M\nEND:VTIMEZONE^M\nBEGIN:VEVENT^M\nSEQUENCE:5^M\nDTSTART;TZID=US/Eastern:20090117T094500^M\nDTSTAMP:20081227T143043Z^M\nSUMMARY:blah blah^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT:urn:uuid^M\n :XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nATTENDEE;CN="First Last";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user@d^M\n omain.tld^M\nEXDATE;TZID=US/Eastern:20110319T094500^M\nDTEND;TZID=US/Eastern:20090117T183000^M\nRRULE:FREQ=WEEKLY;INTERVAL=1^M\nTRANSP:OPAQUE^M\nUID:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX^M\nORGANIZER;CN="First Last":mailto:[email protected]^M\nX-WR-ITIPSTATUSML:UNCLEAN^M\nCREATED:20110317T191348Z^M\nEND:VEVENT^M\nEND:VCALENDAR^M\n\n\n... response:\nHTTP/1.1 403 Forbidden^M\nDate: Thu, 17 Mar 2011 19:14:30 GMT^M\nDav: 1, access-control, calendar-access, calendar-schedule, calendar-auto-schedule, calendar-availability, inbox-availability, calendar-proxy, calendarserver-private-events, calendarserver-private-comments, calendarserver-principal-property-search^M\nContent-Type: text/xml^M\nContent-Length: 134^M\nServer: Twisted/8.2.0 TwistedWeb/8.2.0 TwistedCalDAV/2.5 (iCal Server v12.56.21)^M\n^M\n<?xml version='1.0' encoding='UTF-8'?><error xmlns='DAV:'>^M\n <valid-attendee-change xmlns='urn:ietf:params:xml:ns:caldav'/>^M\n</error>

One thing I have noticed, though I'm not sure if it has any real effect, is that in many of these pre-Snow Leopard Server migration events, the ORGANIZER is specified like the following:

    ORGANIZER;CN=First Last:mailto:[email protected]

But newer ones are more like one of the two following:

    ORGANIZER;CN=First Last;[email protected];SCHEDULE-STATUS=1
    ORGANIZER;CN=First Last;[email protected]:urn:uuid:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX

iCal_Server_Admin_v10.6.pdf notes that the ".db.sqlite" files are completely disposable; they're merely a performance cache and are rebuilt on the fly, so they are safe to delete. I did delete the one for the organizer's calendars and it took longer to process the attempted event delete while it rebuilt the database, but it still errored out in the end. FWIW, the error is thrown by this code: https://trac.calendarserver.org/browser/CalendarServer/trunk/twistedcaldav/scheduling/implicit.py Any further suggestions? I see lots of questions about this in my Google searches, but not solutions, and this is a widespread problem on our iCal Server. 
Now, we mostly try to get users to ignore them (hence the amount of time this question has been open), but every now and then I dig in deeper trying to find the culprit and/or solution.
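For reference, the SRV record mentioned above (page 19 of iCal_Server_Admin_v10.6.pdf) would presumably look something like this in a BIND zone file; the zone and hostname are stand-ins for our environment, and 8008 is Snow Leopard Server's default non-SSL CalDAV port (with SSL it would be _caldavs and 8443):

    _caldav._tcp.example.com. 86400 IN SRV 0 1 8008 calendar.example.com.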


  • SQL Sentry First Impressions

    - by AjarnMark
After struggling to defend my SQL Servers from a political attack recently, I realized that I needed better tools to back me up, and SQL Sentry is the leading candidate. A couple of weeks ago, seemingly from out of nowhere, complaints from the business users started coming in that one of the core internal applications was running dramatically slower than normal, and fingers were being pointed at the SQL Server.  Unfortunately, we don’t have a production DBA whose entire job is to monitor and maintain our SQL Servers.  The responsibility falls to me to do the best I can, investing only a small portion of my time, because there are so many other responsibilities to take care of, and our industry is still deep in recession.  I inherited these SQL Servers and have made significant improvements in process and procedure, but I had not yet made the time to take real baseline measurements or keep a really close eye on the performance.  Like many DBAs, I wrote several of my own tools and used the “built-in tools” like Profiler, PerfMon, and sp_who2 (did I mention most of our instances are SQL Server 2000?).  These have all served me well for in-the-moment troubleshooting and maintenance, but they really fell down on the job when I was called upon to “prove” that SQL Server performance was acceptable and, more importantly, had not degraded recently (i.e. historical comparisons).  I really didn’t have anything from a historical comparison perspective, but I was able to show that current performance was acceptable, and deflect attention back onto other components (which in fact turned out to be the real culprit). That experience dramatically illustrated the need for better monitoring tools.  Coincidentally, I had been talking recently to my boss about the mini nightmare of monitoring several critical and interdependent overnight jobs that operate on separate instances of SQL Server.  Among other tools, I had been using Idera’s SQL Job Manager, which is a free tool that did a nice job of showing me job schedules and histories in a nice calendar view.  This worked fairly well, and for the money (did I mention it was free?) it couldn’t be beat.  But it is based on the stored job history in MSDB, and there were other performance problems that we ran into when we started changing the settings for how much job history to retain, in order to be able to look back a month or more in the calendar view.  Another coincidence (if you believe in such things) was that when we had some of those performance challenges, I posted a couple of questions to the #sqlhelp hashtag on Twitter and Greg Gonzalez (@SQLSensei) suggested I check out SQL Sentry’s Event Manager.  At the time, I just thought he worked there, but later found out that he founded the company.  When I took a quick look at the features & benefits, the one that really jumped out at me is Chaining and Queueing, which sounded like it would really help with our “interdependent jobs on different servers” issue. I know that is a lot of background story and coincidences, but hopefully you have stuck with me so far, and now we have arrived at the point where last week I downloaded and installed the 30-day trial of the SQL Sentry Power Suite, which is Event Manager plus Performance Advisor.  And I must say that I really like what I see so far.  Here are a few highlights:

Great Support.  I had two issues getting the trial set up and monitoring a handful of our servers.  
One was entirely my fault (I missed a security setting in SQL 2008) and the other was mostly my fault (a late change to some config settings that were apparently cached and did not get refreshed properly).  In both cases, the support staff at SQL Sentry were very responsive and rather quickly figured out the cause and fix for each.  This left me with a great impression of the company.  Kudos to them!

Chaining and Queueing.  While I have not yet activated this feature, I am very excited about the possibilities.  We have jobs on three different instances of SQL Server that have to be run in a certain order, and each has to finish before the next can successfully begin, and I believe this feature will ensure just that.  It has been a real pain in the backside when one of those jobs runs just a little too long and does not finish before the job on another instance starts, thus triggering a chain reaction of either outright job failures, or worse, successful completion of completely invalid processing.

Calendar View.  I really, really like the Event Manager calendar view where I can see all jobs and events across all instances and identify potential resource contention as well as windows of opportunity for maintenance activity.  Very well done, and based on Event Manager’s own database of accumulated historical information rather than querying the source instances every time.

Performance Advisor Dashboard History View.  This view lets me quickly select a date and time range and it displays graphs of key SQL Server and Windows metrics.  This is exactly the thing I needed to answer the “has performance changed recently” question at the beginning of this post.

Reporting Services Subscription Jobs with Report Name.  This was a big and VERY pleasant surprise.  If you have ever looked at the list of SQL Server jobs that SQL Server Reporting Services creates when you make a Subscription, you will notice that they all have some sort of GUID as the name of the job.  This is really ugly, and really annoying, because when you are just looking at the SQL Agent and Job Activity Monitor, if you see that Job X failed, you really do not have any indication in the name or the properties of the Job itself as to what Report it was for.  But with SQL Sentry Event Manager you do.  The Jobs list in the Navigator pane in SQL Sentry, amazingly, displays the name of the Report that the Subscription Job is for.  And when you open it to see more details, it shows you the full Reporting Services path to that Report, so you can immediately track it down in the Report Manager in case you want to identify/notify the owner or edit the Subscription information.  I did not expect this at all, but I sure do like it.  HOORAY!

Those are just my first impressions from using the tools for a few days.  And I haven’t even gotten into how it showed me where I was completely mistaken about one aspect of my SQL Server disk configurations.  I’ll share that lesson in another blog entry.  But I have to say it again: the combination of Event Manager and Performance Advisor working together has really made me a fan.


  • Spacewalk 2.0 provided to manage Oracle Linux systems

    - by wcoekaer
Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per-server cost; it's all included in our Basic, Premier and Systems support subscriptions. The nice thing with Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third party connectors for storage or other products, and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with self-service, chargeback, etc... Another, much simpler, option is just using yum. It is very easy to take a server and create directories and expose these through apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files, but it's easy to do and for many people a preferred solution. There are also a good number of customers that just connect their servers directly to ULN or to our free update server public-yum. Just to reiterate: our public-yum servers have all the errata and updates available for free. Now we have added another option. Many of our customers have switched from a competing Linux vendor and they had familiarity with their management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers, but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we are now offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, etc. You can find the rpms in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There is a repository for the spacewalk server, and then for each version (OL5, OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64. Documentation can be found here. I set it up myself, and here are some quick steps on how you can get going in just a matter of minutes:

Spacewalk Server Installation:

1) Installing an Oracle Database
Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition) [at this time use 11g, we will add support for 12c in the near future]. This database can be installed on the spacewalk server or on a separate remote server. While Oracle XE might work to create a small sample POC, we do not support the use of Oracle XE; spacewalk repositories can become large and create a significant database workload. Customers can use their existing database licenses, they can download the database with a trial license from http://edelivery.oracle.com, or Oracle Linux subscribers (customers) will be allowed to use the Oracle Database as a spacewalk repository as part of their Oracle Linux subscription at no additional cost. 
|NOTE : spacewalk requires the database to be configured with the UTF8 character set.
|Installation will fail if your database does not use UTF8.
|To verify if your database is configured correctly, run the following command in sqlplus:
|
|select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
|This should return 'AL32UTF8'

2) Configure the database schema for spacewalk
Ideally, create a tablespace in the database to hold the spacewalk schema tables/data:

    create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf'
      size 10G autoextend on;

Create the database user spacewalk (or use some other schema name) in sqlplus. Example:

    create user spacewalk identified by spacewalk;
    grant connect, resource to spacewalk;
    grant create table, create trigger, create synonym, create view, alter session to spacewalk;
    grant unlimited tablespace to spacewalk;
    alter user spacewalk default tablespace spacewalk;

3) Spacewalk installation and configuration
Spacewalk server requires an Oracle Linux 6 x86-64 system. Clients can be Oracle Linux 5 or 6, both 32- and 64-bit. The server is only supported on OL6/64-bit. The easiest way to get started is to do a 'Minimal' install of Oracle Linux on a server and configure the yum repository to include the spacewalk repo from public-yum. Once you have a system with a minimal install, modify your yum repo to include the spacewalk repo. Example: edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file:

    [spacewalk]
    name=spacewalk
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1

Install the following prerequisite packages on your spacewalk server: oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64 and oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64:

    rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
    rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

The above RPMs can be found on the Oracle Technology Network website: http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html As the root user, configure the library path to include the Oracle Instant Client libraries:

    cd /etc/ld.so.conf.d
    echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf
    ldconfig

Install spacewalk:

    # yum install spacewalk-oracle

The above yum command should download and install all required packages to run spacewalk on your local server.

| NOTE : if you did a full, desktop or workstation installation,
| you have to remove the JTA package
| BEFORE installing spacewalk-oracle (rpm -e --nodeps jta)

Once the installation completes, simply run the spacewalk configuration tool and you are all set. (Make sure to run the command with the 2 arguments.)

    spacewalk-setup --disconnected --external-db

Answer the questions during the setup; ensure you provide the current database user (example: spacewalk) and password (example: spacewalk) and the database server hostname (the standard hostname of the server on which you have deployed the Oracle database). At the end of the setup script, your spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website: http://[spacewalkserverhostname] The very first action will be to create the main admin account.
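From there, the logical next step is registering a client system. As a rough sketch (this part is not covered above): the client repo path below is my assumption based on the server repo layout, and the activation key is a placeholder you would first create in the web portal:

    # on an OL6 client: add the spacewalk client repo, e.g. in /etc/yum.repos.d/,
    # with a baseurl along these lines (assumed analogous to the server repo):
    #   baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/client/$basearch/

    # install the standard Spacewalk client tooling
    yum install rhn-client-tools rhn-check rhn-setup rhnsd m2crypto yum-rhn-plugin

    # register the system using an activation key created on your spacewalk server
    rhnreg_ks --serverUrl=http://spacewalk.example.com/XMLRPC --activationkey=1-mykey

Once registered, clients pull packages and configuration through the spacewalk server instead of talking to public-yum directly.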


  • How to Recover From a Virus Infection: 3 Things You Need to Do

    - by Chris Hoffman
If your computer becomes infected with a virus or another piece of malware, removing the malware from your computer is only the first step. There’s more you need to do to ensure you’re secure. Note that not every antivirus alert is an actual infection. If your antivirus program catches a virus before it ever gets a chance to run on your computer, you’re safe. If it catches the malware later, you have a bigger problem.

Change Your Passwords
You’ve probably used your computer to log into your email, online banking websites, and other important accounts. Assuming you had malware on your computer, the malware could have logged your passwords and uploaded them to a malicious third party. With just your email account, the third party could reset your passwords on other websites and gain access to almost any of your online accounts. To prevent this, you’ll want to change the passwords for your important accounts — email, online banking, and whatever else you’ve logged into from the infected computer. You should probably use another computer that you know is clean to change the passwords, just to be safe. When changing your passwords, consider using a password manager to keep track of strong, unique passwords and two-factor authentication to prevent people from logging into your important accounts even if they know your password. This will help protect you in the future.

Ensure the Malware Is Actually Removed
Once malware gets access to your computer and starts running, it has the ability to do many more nasty things to your computer. For example, some malware may install rootkit software and attempt to hide itself from the system. Many types of Trojans also “open the floodgates” after they’re running, downloading many different types of malware from malicious web servers to the local system. In other words, if your computer was infected, you’ll want to take extra precautions. You shouldn’t assume it’s clean just because your antivirus removed what it found. It’s probably a good idea to scan your computer with multiple antivirus products to ensure maximum detection. You may also want to run a bootable antivirus program, which runs outside of Windows. Such bootable antivirus programs will be able to detect rootkits that hide themselves from Windows and even from the software running within Windows. avast! offers the ability to quickly create a bootable CD or USB drive for scanning, as do many other antivirus programs. You may also want to reinstall Windows (or use the Refresh feature on Windows 8) to get your computer back to a clean state. This is more time-consuming, especially if you don’t have good backups and can’t get back up and running quickly, but it is the only way you can have 100% confidence that your Windows system isn’t infected. It’s all a matter of how paranoid you want to be.

Figure Out How the Malware Arrived
If your computer became infected, the malware must have arrived somehow. You’ll want to examine your computer’s security and your habits to prevent more malware from slipping through in the same way. Windows is complex. For example, there are over 50 different potentially dangerous file extensions that can contain malware for you to keep track of. We’ve tried to cover many of the most important security practices you should be following, but here are some of the more important questions to ask:

Are you using an antivirus? – If you don’t have an antivirus installed, you should. 
If you have Microsoft Security Essentials (known as Windows Defender on Windows 8), you may want to switch to a different antivirus like the free version of avast!. Microsoft’s antivirus product has been doing very poorly in tests.

Do you have Java installed? – Java is a huge source of security problems. The majority of computers on the Internet have an out-of-date, vulnerable version of Java installed, which would allow malicious websites to install malware on your computer. If you have Java installed, uninstall it. If you actually need Java for something (like Minecraft), at least disable the Java browser plugin. If you’re not sure whether you need Java, you probably don’t.

Are any browser plugins out-of-date? – Visit Mozilla’s Plugin Check website (yes, it also works in other browsers, not just Firefox) and see if you have any critically vulnerable plugins installed. If you do, ensure you update them — or uninstall them. You probably don’t need older plugins like QuickTime or RealPlayer installed on your computer, although Flash is still widely used.

Are your web browser and operating system set to automatically update? – You should be installing updates for Windows via Windows Update when they appear. Modern web browsers are set to automatically update, so they should be fine — unless you went out of your way to disable automatic updates. Using out-of-date web browsers and Windows versions is dangerous.

Are you being careful about what you run? – Watch out when downloading software to ensure you don’t accidentally click sketchy advertisements and download harmful software. Avoid pirated software that may be full of malware. Don’t run programs from email attachments. Be careful about what you run and where you get it from in general.

If you can’t figure out how the malware arrived because everything looks okay, there’s not much more you can do. Just try to follow proper security practices. You may also want to keep an extra-close eye on your credit card statement for a while if you did any online shopping recently. As so much malware is now related to organized crime, credit card numbers are a popular target.


  • Combined Likelihood Models

    - by Lukas Vermeer
    In a series of posts on this blog we have already described a flexible approach to recording events, a technique to create analytical models for reporting, a method that uses the same principles to generate extremely powerful facet based predictions and a waterfall strategy that can be used to blend multiple (possibly facet based) models for increased accuracy. This latest, and also last, addition to this sequence of increasing modeling complexity will illustrate an advanced approach to amalgamate models, taking us to a whole new level of predictive modeling and analytical insights; combination models predicting likelihoods using multiple child models. The method described here is far from trivial. We therefore would not recommend you apply these techniques in an initial implementation of Oracle Real-Time Decisions. In most cases, basic RTD models or the approaches described before will provide more than enough predictive accuracy and analytical insight. The following is intended as an example of how more advanced models could be constructed if implementation results warrant the increased implementation and design effort. Keep implemented statistics simple! Combining likelihoods Because facet based predictions are based on metadata attributes of the choices selected, it is possible to generate such predictions for more than one attribute of a choice. We can predict the likelihood of acceptance for a particular product based on the product category (e.g. ‘toys’), as well as based on the color of the product (e.g. ‘pink’). Of course, these two predictions may be completely different (the customer may well prefer toys, but dislike pink products) and we will have to somehow combine these two separate predictions to determine an overall likelihood of acceptance for the choice. Perhaps the simplest way to combine multiple predicted likelihoods into one is to calculate the average (or perhaps maximum or minimum) likelihood. However, this would completely forgo the fact that some facets may have a far more pronounced effect on the overall likelihood than others (e.g. customers may consider the product category more important than its color). We could opt for calculating some sort of weighted average, but this would require us to specify up front the relative importance of the different facets involved. This approach would also be unresponsive to changing consumer behavior in these preferences (e.g. product price bracket may become more important to consumers as a result of economic shifts). Preferably, we would want Oracle Real-Time Decisions to learn, act upon and tell us about, the correlations between the different facet models and the overall likelihood of acceptance. This additional level of predictive modeling, where a single supermodel (no pun intended) combines the output of several (facet based) models into a single prediction, is what we call a combined likelihood model. Facet Based Scores As an example, we have implemented three different facet based models (as described earlier) in a simple RTD inline service. These models will allow us to generate predictions for likelihood of acceptance for each product based on three different metadata fields: Category, Price Bracket and Product Color. We will use an Analytical Scores entity to store these different scores so we can easily pass them between different functions. 
A simple function, creatively named Compute Analytical Scores, will compute for each choice the different facet scores and return an Analytical Scores entity that is stored on the choice itself. For each score, a choice attribute referring to this entity is also added to be returned to the client to facilitate testing.

One Offer To Predict Them All
In order to combine the different facet based predictions into one single likelihood for each product, we will need a supermodel which can predict the likelihood of acceptance based on the outcomes of the facet models. This model will not need to consider any of the attributes of the session, because they are already represented in the outcomes of the underlying facet models. For the same reason, the supermodel will not need to learn separately for each product, because the specific combination of facets for this product is also already represented in the output of the underlying models. In other words, instead of learning how session attributes influence acceptance of a particular product, we will learn how the outcomes of facet based models for a particular product influence acceptance at a higher level. We will therefore be using a single All Offers choice to represent all offers in our combined likelihood predictions. This choice has no attribute values configured, no scores and not a single eligibility rule; nor is it ever intended to be returned to a client. The All Offers choice is to be used exclusively by the Combined Likelihood Acceptance model to predict the likelihood of acceptance for all choices, based solely on the output of the facet based models defined earlier.

The Switcheroo
In Oracle Real-Time Decisions, models can only learn based on attributes stored on the session. Therefore, just before generating a combined prediction for a given choice, we will temporarily copy the facet based scores—stored on the choice earlier as an Analytical Scores entity—to the session. The code for the Predict Combined Likelihood Event function is outlined below.

    // set session attribute to contain facet based scores
    // (this is the only input for the combined model)
    session().setAnalyticalScores(choice.getAnalyticalScores());

    // predict likelihood of acceptance for the All Offers choice
    CombinedLikelihoodChoice c = CombinedLikelihood.getChoice("AllOffers");
    Double la = CombinedLikelihoodAcceptance.getChoiceEventLikelihoods(c, "Accepted");

    // clear session attribute of facet based scores
    session().setAnalyticalScores(null);

    // return likelihood
    return la;

This sleight of hand will allow the Combined Likelihood Acceptance model to predict the likelihood of acceptance for the All Offers choice using these choice specific scores. After the prediction is made, we will clear the Analytical Scores session attribute to ensure it does not pollute any of the other (facet) models. To guarantee our combined likelihood model will learn based on the facet based scores—and is not distracted by the other session attributes—we will configure the model to exclude any other inputs, save for the instance of the Analytical Scores session attribute, on the model attributes tab.

Recording Events
In order for the combined likelihood model to learn correctly, we must ensure that the Analytical Scores session attribute is set correctly at the moment RTD records any events related to a particular choice. We apply essentially the same switching technique as before in a Record Combined Likelihood Event function. 
    // set session attribute to contain facet based scores
    // (this is the only input for the combined model)
    session().setAnalyticalScores(choice.getAnalyticalScores());

    // record input event against the All Offers choice
    CombinedLikelihood.getChoice("AllOffers").recordEvent(event);

    // force learn at this moment using the Internal Learn entry point
    Application.getPredictor().learn(InternalLearn.modelArray, session(), session(),
        Application.currentTimeMillis());

    // clear session attribute of facet based scores
    session().setAnalyticalScores(null);

In this example, Internal Learn is a special informant configured as the learn location for the combined likelihood model. The informant itself has no particular configuration and does nothing in itself; it is used only to force the model to learn at the exact instant we have set the Analytical Scores session attribute to the correct values.

Reporting Results
After running a few thousand (artificially skewed) simulated sessions on our ILS, the Decision Center reporting shows some interesting results. In this case, these results reflect perfectly the bias we ourselves had introduced in our tests. In practice, we would obviously use a wider range of customer attributes and expect to see some more unexpected outcomes. The faceted model for categories has clearly picked up on the fact that our simulated youngsters have little interest in purchasing the one red-hot vehicle our ILS had on offer. Also, it would seem that customer age is an excellent predictor for the acceptance of pink products. Looking at the key drivers for the All Offers choice, we can see the relative importance of the different facets to the prediction of overall likelihood. The comparative importance of the category facet for overall prediction might, in part, be explained by the clear preference of younger customers for toys over other product types, as evident from the report on the predictiveness of customer age for offer category acceptance.

Conclusion
Oracle Real-Time Decisions' flexible decisioning framework allows for the construction of exceptionally elaborate prediction models that facilitate powerful targeting, but nonetheless provide insightful reporting. Although few customers will have a direct need for such a sophisticated solution architecture, it is encouraging to see that this lies within the realm of the possible with RTD, and this with limited configuration and customization required. There are obviously numerous other ways in which the predictive and reporting capabilities of Oracle Real-Time Decisions can be expanded upon to tailor to individual customers' needs. We will not be able to elaborate on them all on this blog; and finding the right approach for any given problem is often more difficult than implementing the solution. Nevertheless, we hope that these last few posts have given you enough of an understanding of the power of the RTD framework and its models, so that you can take some of these ideas and improve upon your own strategy. As always, if you have any questions about the above—or any Oracle Real-Time Decisions design challenges you might face—please do not hesitate to contact us; via the comments below, social media or directly at Oracle. We are completely multi-channel and would be more than glad to help. :-)


  • my webserver with 16GB ram shows all RAM as used, but is it really, see the 'top'

    - by Alex
I have some questions about my web server. It's a LAMP web server running CentOS 5.5, PHP 5 and MySQL 5. The server gets hundreds (maybe thousands) of concurrent users during peak hours. I'm trying to optimize a little and understand "top". From what I can see:

1. All 16GB of my RAM have been used up? Does that mean that my server needs more memory?
2. My swap is only 2GB; should it be increased?
3. Usually during peak hours the first number of my server's load average is about 2.5-3. What could I do to optimize the server so that the load average even during peak doesn't go above 1? In the past I was told a good working server should stay under a load of 1; is this still true? Although even during a load of 2.5-3, server pages and applications seem to load with pretty good speed.
4. What should the memory size in php.ini be set to?

    top - 14:30:18 up 2 days, 12:41, 5 users, load average: 1.25, 1.74, 2.92
    Tasks: 305 total, 2 running, 302 sleeping, 0 stopped, 1 zombie
    Cpu(s): 6.3%us, 0.9%sy, 0.0%ni, 92.5%id, 0.2%wa, 0.0%hi, 0.1%si, 0.0%st
    Mem: 16427200k total, 16111472k used, 315728k free, 3120316k buffers
    Swap: 2104496k total, 268k used, 2104228k free, 6216756k cached

    PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
    29080 apache 15 0  358m  36m  5192 S 20.2 0.2  2:08.40 httpd
    29093 apache 18 0  357m  36m  5192 S 18.2 0.2  2:02.52 httpd
    29079 apache 15 0  370m  49m  5832 S 10.0 0.3  2:32.14 httpd
    1812  apache 15 0  370m  49m  5196 S 7.3  0.3  2:25.30 httpd
    5204  apache 15 0  358m  36m  5168 S 5.3  0.2  0:59.28 httpd
    29075 apache 15 0  370m  48m  5184 S 3.3  0.3  2:15.93 httpd
    9712  apache 15 0  360m  38m  5180 S 3.0  0.2  0:54.81 httpd
    29072 apache 16 0  358m  36m  5192 S 2.7  0.2  2:24.43 httpd
    6310  apache 17 0  388m  67m  5180 S 2.3  0.4  0:58.85 httpd
    8674  apache 15 0  343m  21m  4980 S 2.0  0.1  0:07.91 httpd
    29085 apache 15 0  371m  49m  5224 S 2.0  0.3  2:16.86 httpd
    29083 apache 15 0  370m  48m  5196 S 1.7  0.3  2:10.64 httpd
    5575  apache 15 0  357m  36m  5228 S 1.3  0.2  0:53.78 httpd
    29066 apache 15 0  379m  59m  5860 R 1.3  0.4  2:11.93 httpd
    29078 apache 15 0  370m  48m  5188 S 1.3  0.3  2:14.52 httpd
    29084 apache 15 0  370m  48m  5208 S 1.0  0.3  2:02.49 httpd
    29089 apache 15 0  370m  48m  5188 S 1.0  0.3  2:27.61 httpd
    29082 apache 15 0  390m  68m  5188 S 0.7  0.4  2:32.48 httpd
    29984 apache 15 0  358m  36m  5228 S 0.7  0.2  2:08.32 httpd
    3571  root   16 0  13400 1792 848  S 0.3  0.0  2:37.89 top
    4419  mysql  15 0  668m  175m 7204 S 0.3  1.1  3:32.25 mysqld
    28181 root   15 0  90460 3624 2680 S 0.3  0.0  0:17.60 sshd
    29091 apache 15 0  390m  69m  5196 S 0.3  0.4  2:29.99 httpd
    32476 root   15 0  12900 1320 848  R 0.3  0.0  0:06.46 top
    1     root   15 0  10372 680  572  S 0.0  0.0  0:02.01 init
    2     root   RT -5 0     0    0    S 0.0  0.0  0:00.51 migration/0
    3     root   34 19 0     0    0    S 0.0  0.0  0:00.07 ksoftirqd/0
    4     root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/0
    5     root   RT -5 0     0    0    S 0.0  0.0  0:00.12 migration/1
    6     root   34 19 0     0    0    S 0.0  0.0  0:00.03 ksoftirqd/1
    7     root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/1
    8     root   RT -5 0     0    0    S 0.0  0.0  0:00.06 migration/2
    9     root   34 19 0     0    0    S 0.0  0.0  0:00.03 ksoftirqd/2
    10    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/2
    11    root   RT -5 0     0    0    S 0.0  0.0  0:00.06 migration/3
    12    root   34 19 0     0    0    S 0.0  0.0  0:00.04 ksoftirqd/3
    13    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/3
    14    root   RT -5 0     0    0    S 0.0  0.0  0:01.45 migration/4
    15    root   34 19 0     0    0    S 0.0  0.0  0:00.01 ksoftirqd/4
    16    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/4
    17    root   RT -5 0     0    0    S 0.0  0.0  0:00.22 migration/5
    18    root   34 19 0     0    0    S 0.0  0.0  0:00.01 ksoftirqd/5
    19    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/5
    20    root   RT -5 0     0    0    S 0.0  0.0  0:00.15 migration/6
    21    root   34 19 0     0    0    S 0.0  0.0  0:00.02 ksoftirqd/6
    22    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/6
    23    root   RT -5 0     0    0    S 0.0  0.0  0:00.15 migration/7
    24    root   34 19 0     0    0    S 0.0  0.0  0:00.01 ksoftirqd/7
    25    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/7
    26    root   RT -5 0     0    0    S 0.0  0.0  0:00.19 migration/8
    27    root   34 19 0     0    0    S 0.0  0.0  0:00.04 ksoftirqd/8
    28    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/8
    29    root   RT -5 0     0    0    S 0.0  0.0  0:00.34 migration/9
    30    root   34 19 0     0    0    S 0.0  0.0  0:00.03 ksoftirqd/9
    31    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/9
    32    root   RT -5 0     0    0    S 0.0  0.0  0:00.16 migration/10
    33    root   34 19 0     0    0    S 0.0  0.0  0:00.04 ksoftirqd/10
    34    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/10
    35    root   RT -5 0     0    0    S 0.0  0.0  0:00.12 migration/11
    36    root   34 19 0     0    0    S 0.0  0.0  0:00.05 ksoftirqd/11
    37    root   RT -5 0     0    0    S 0.0  0.0  0:00.00 watchdog/11
    38    root   RT -5 0     0    0    S 0.0  0.0  0:00.35 migration/12


  • Building an Infrastructure Cloud with Oracle VM for x86 + Enterprise Manager 12c

    - by Richard Rotter
Cloud Computing? Everyone is talking about Cloud these days. Everyone is explaining how the cloud will help you to bring your service up and running very fast, secure and with little effort. You can find these kinds of presentations at almost every event around the globe. But what is really behind all this stuff? Is it really so simple? And the answer is: Yes it is! With the Oracle SW Stack it is! In this post, I will try to bring this down to earth, demonstrating how easy it could be to build a cloud infrastructure with Oracle's solution for cloud computing. But let me cover some basics first:

- How fast can you build a cloud?
- How elastic is your cloud, so you can provide new services on demand?
- How much effort does it take to monitor and operate your Cloud Infrastructure in order to meet your SLAs?
- How easy is it to charge back for the services provided?

These are the critical success factors of Cloud Computing. And Oracle has an answer to all those questions. By using Oracle VM for X86 in combination with Enterprise Manager 12c you can build and control your cloud environment very fast and easily. What are the fundamental building blocks for your cloud?

Oracle Cloud Building Blocks

#1 Hardware
Surprise, surprise. Even the cloud needs to run somewhere, hence you will need hardware. This HW normally consists of servers, storage and networking. But Oracle goes beyond that. There are Optimized Solutions available for your cloud infrastructure. This is a cookbook to build your HW cloud platform. For example, building your cloud infrastructure with blades and our network infrastructure will reduce complexity in your datacenter (blades with switch network modules, splitter cables to reduce the amount of cables, and TOR (Top Of the Rack) switches which build the interface to your infrastructure environment). Reducing complexity, even in the cabling, will help you to manage your environment more efficiently and with less risk. Of course, our engineered systems fit into the cloud perfectly too. Although they are considered a PaaS themselves, having the database SW (for Exadata) and the application development environment (for Exalogic) already deployed on them, in general they are ideal systems to enable you to build your own cloud and PaaS infrastructure.

#2 Virtualization
The next missing link in the cloud setup is virtualization. For me personally, it's one of the best hidden "secrets" that Oracle can provide you with a complete virtualization stack in terms of a hypervisor on both architectures: X86 and SPARC CPUs. There is Oracle VM for X86 and Oracle VM for SPARC available at no additional license cost if you are running this virtualization stack on top of Oracle HW (and with Oracle Premier Support for HW). This completes the virtualization portfolio together with Solaris Zones, introduced with Solaris 10 a few years ago. Let me explain how Oracle VM for X86 works. Oracle VM for x86 consists of two main parts:

- The Oracle VM Server: Oracle VM Server is installed on bare metal and it is the hypervisor which is able to run virtual machines. It has a very small footprint: the ISO image of Oracle VM Server is only about 200 MB. It is very small but efficient. You can install an OVM Server in less than 5 minutes by booting the server with the ISO image assigned and providing the necessary configuration parameters (like installing a Linux distribution). After the installation, the OVM Server is ready to use. That's all. 
    - The Oracle VM Manager: OVM Manager is the central management tool from which you control your OVM Servers. It provides the graphical user interface - an Application Development Framework (ADF) application with a familiar web-browser-based interface - to manage Oracle VM Servers, virtual machines, and resources. The Oracle VM Manager has the following capabilities: create virtual machines; create server pools; power virtual machines on and off; manage networks and storage; import virtual machines, ISO files, and templates; manage high availability of Oracle VM Servers, server pools, and virtual machines; and perform live migration of virtual machines. I want to highlight one of the goodies you can use if you are running Oracle VM for x86: preconfigured, downloadable virtual machine templates from eDelivery. With these templates, you can download completely preconfigured virtual machines into your environment, boot them up, configure them at first boot, and use them. Templates are available for almost all Oracle software and applications (such as Fusion Middleware, Database, Siebel, etc.).

    #3 Cloud Management. The management of your cloud infrastructure is key. This is a day-to-day job. Acquiring hardware and installing a virtualization layer on top of it is done once at the beginning, and again only when you expand your infrastructure. But managing your cloud - keeping it up and running, deploying new services, changing your chargeback model, and so on - these are the daily jobs. They must be simple, secure and easy to perform, and Enterprise Manager 12c Cloud provides this functionality from one management cockpit. Enterprise Manager 12c uses Oracle VM Manager to control OVM server pools. Once you have registered your OVM Managers in Enterprise Manager, you can set up your cloud infrastructure and manage everything from Enterprise Manager. What you need to do in EM12c is:

    Register your OVM Manager in Enterprise Manager. After registering your OVM Manager, all the functionality of Oracle VM for x86 is also available in Enterprise Manager. Enterprise Manager works as a "manager of managers": you can register as many OVM Managers as you want and control your complete virtualization environment.

    Create roles and users for your Self Service Portal in Enterprise Manager. With this step you allow users to log on to the Enterprise Manager Self Service Portal, where they can request virtual machines.

    Set up the cloud infrastructure. Set up the quotas for your self-service users: how many VMs can they request, and how much of your resources (CPU, memory, storage, network, etc.)? Which software components (templates, assemblies) can your self-service users request? In this step, you basically set up the complete cloud infrastructure.

    Set up chargeback. Once your cloud is set up, you need to configure your chargeback mechanism. Enterprise Manager collects resource metrics at a very detailed level, and almost all collected metrics can be used in the chargeback module. You can define chargeback plans based on configuration (charging for the amount of CPU, memory, or storage assigned to a machine, or for a specific OS that is installed), on resource consumption (% of CPU used, storage used, etc.), or on a combination of configuration and consumption. The chargeback module is very flexible.
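    To make the two plan styles concrete, here is a minimal, hypothetical SQL sketch of a combined configuration-plus-consumption calculation. The table, column names and rates are purely illustrative - this is not Enterprise Manager's actual repository schema or its chargeback engine:

        -- Hypothetical table: one row of collected metrics per VM per day.
        CREATE TABLE vm_daily_metrics (
            vm_name      VARCHAR(64),
            sample_date  DATE,
            cpu_pct_used DECIMAL(5,2),  -- consumption metric: average CPU utilization
            mem_gb       INT,           -- configuration metric: memory assigned
            storage_gb   INT            -- configuration metric: storage assigned
        );

        -- Combined plan: flat daily rates on assigned memory/storage (configuration),
        -- plus a rate scaled by actual CPU utilization (consumption).
        SELECT vm_name,
               SUM(mem_gb     * 0.05)           AS memory_charge,
               SUM(storage_gb * 0.01)           AS storage_charge,
               SUM(cpu_pct_used / 100.0 * 1.20) AS cpu_charge
        FROM vm_daily_metrics
        GROUP BY vm_name;

    The point is simply the split: configuration charges depend on what a VM is assigned, consumption charges on what it actually uses, and a plan may mix both.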
    Here is an overview of the workflow for handling an infrastructure cloud in EM: Summary As you can see, setting up an infrastructure cloud service with Oracle VM for x86 and Enterprise Manager 12c is really simple. I personally configured a complete cloud environment with three x86 servers and a small JBOD SAN box in less than three hours. There is no magic in it; it is all straightforward. Of course, you need some experience with Oracle VM and Enterprise Manager, and experience in setting up Linux environments helps as well. I plan to publish a technical cookbook in the next few weeks. I hope you found this post useful and will see you again here on our blog. Any hints or comments are welcome!

    Read the article

  • CodePlex Daily Summary for Saturday, October 27, 2012

    CodePlex Daily Summary for Saturday, October 27, 2012

    Popular Releases:
    - RazorSourceGenerator: RazorSourceGenerator v1.1 Installer.
    - Fruit Juice: Fruit Juice v1.1: Changelog (v1.1): minor design fixes; added live tiles; added the new Windows Phone Store download badge.
    - ZXMAK2: Version 2.6.7.0: small performance improvements; fixes and improvements for the Direct3D renderer (thanks to zebest for testing).
    - Media Companion: Media Companion 3.507b: Once again, it has been some time since our last release, and there have been a number of changes since then. It is hoped that these changes will address some of the issues users have been experiencing, and of course, work continues! New features: support for adding home movies; option to sort movies by votes; a 'selectedBrowser' preference used when opening links in an external browser; option to fall back to getting the runtime from the movie file if it is not available on IMDB; added new Big...
    - MSBuild Extension Pack: October 2012: the October 2012 release provides a collection of over 475 MSBuild tasks (see the release blog post). A high-level summary of what the tasks currently cover: system items (Active Directory, certificates, COM+, console, date and time, drives, environment variables, event logs, files and folders, FTP, GAC, network, performance counters, registry, services, sound) and code (assemblies, AsyncExec, CAB files, code signing, DynamicExecute, file detokenisation, GUI...).
    - NAudio: NAudio 1.6: release notes at http://mark-dot-net.blogspot.co.uk/2012/10/naudio-16-release-notes-10th.html
    - PowerShell Community Extensions: 2.1 Production: release notes, Oct 25, 2012. This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.
    - DbDiff: Database Diff and Database Scripting: 1.3.3.5: wrong load options (deskey wrong).
    - Building Windows 8 Apps with C# and XAML: Full Source Chapters 1-10 for Windows 8 Fix 001: the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes a fix for the Netflix example from Chapter 6 that was missing a service reference.
    - PdfReport: PdfReport 1.3: removed the restriction against defining duplicate column names (see the DuplicateColumns sample); added a horizontal stack panel mode (see the CharacterMap sample); added pdfStamper to onFillAcroForm of PdfTemplate (see the QuestionsAcroForm sample). Added six new samples (http://pdfreport.codeplex.com/SourceControl/BrowseLatest): AccountingBalanceColumn, CharacterMap, CustomPriceNumber, DuplicateColumns, QuestionsAcroForm, QuestionsForm.
    - Umbraco CMS: Umbraco 4.9.1: a bugfix release addressing major issues in 4.9.0. The full list of fixes can be found in the issue tracker's filtered results. A summary: split buttons work again, and you can now scroll more easily when the list is too long for the screen; media and content pickers show the full path of the picked item; fixed: publish status could be inaccurate on nodes with large doctypes; fixed: two media folders and recycle bins after upgrading to 4.9. The template/code...
    - AcDown Downloader Framework: AcDown v4.2.2: a downloader supporting Acfun, Bilibili, YouTube, SF and other sites, with a companion AcPlay player; written in C# against .NET Framework 2.0, it runs on 32- and 64-bit Windows XP/Vista/7/8, and on Linux via Mono.
    - Rawr: Rawr 5.0.2: this is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (not yet updated for MoP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    - MCEBuddy 2.x: MCEBuddy 2.3.5: changelog for 2.3.5 (32-bit and 64-bit): 1. fixed a bug causing MCEBuddy to crash during or after installation on Windows XP; 2. fixed a resource leak with UPnP which would lead to a failure after many days; 3. increased the UPnP discovery re-scan interval from 10 minutes to 30 minutes; 4. added support for specifying TVDB and IMDB IDs on the conversion task page (forcing the internet lookup for metadata).
    - CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] support for connecting to CRM Online via Office 365 (OSDP); [NEW] current connection information and loaded ribbon name displayed in the status bar; [IMPROVED] minor Connect dialog improvements and better error message descriptions; [IMPROVED] connecting to a CRM server will close the currently loaded ribbon upon confirmation (if another ribbon was loaded previously); [FIX] fixed a bug in the Open Ribbon dialog which would not allow the entity list to be refreshed more than once.
    - Readable Passphrase Generator: KeePass Plugin 0.8.0: changes: interrogative phrases (questions) like "why did the statesman burgle amidst lucid sunlamps"; support for transitive/intransitive verbs (whether a verb needs a subject or not); adverbs now appear either before or after the verb, at random; an "equal" version of each strength, where each possibility is equally likely (for password purists); 3401 words in the default dictionary (~400 more than the previous release); fixed bugs in choosing verb tenses.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.72: fix for issue #18819 - bad optimization of the return/assign operator.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): this release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements: .NET Framework 4.0 (the package contains a solution file for Visual Studio 2010); the unit test projects require Visual Studio 2010 Professional. Changelog legend: [B] breaking change; [O] member marked as obsolete. WAF: fixed the recent-file-list remove issue; WAF: minor code improvements; BookLibrary: fixed Blend design-time support o...
    - Fiskalizacija za developere: FiskalizacijaDev 1.1: This is the first update of this project since its initial introduction - we have added a few features, some because we noticed ourselves that they would be good to add, some based on your suggestions - thanks to everyone who got involved :) These are the items resolved in v1.1: 1. the XML document sent to CIS can now be saved to a file (http://fiskalizacija.codeplex.com/workitem/612); 2. support for COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613); 3. support for DOS (unu...
    - Liberty: v3.4.0.0 Release 20th October 2012: change log - Added: Halo 4 support (invincibility, ammo editing); Reach: a warning dialog now shows up when you first attempt to swap a weapon. Fixed: a few minor bugs.

    New Projects:
    - 25minutes the simpliest, best looking, hassle free Pomodoro Technique® timer: it is the simplest, best-looking, hassle-free timer for all Pomodoro Technique® fans out there.
    - Argument: a website for constructing logical arguments in tree form.
    - Asset manager: more information coming soon!
    - bootster: bootster is a bootstrapper for small/medium-sized .NET web projects.
    - BugHerd-4-DNN: this DotNetNuke(TM) extension simplifies the integration of BugHerd on your DNN portals.
    - CaptureLinks 2.0: CaptureLink 2.0 with SharePoint 2013 support.
    - Deploy File Demo: Deploy File Demo is the companion piece to Deploy File Generator. It shows how to deploy files that have been imported into a special '.resources' file.
    - DIfmClient: DIfmClient is a WPF (Windows Presentation Foundation) DI.FM streaming radio player, written in C#.
    - FinApps201240: a financial application for La Caixa.
    - FirstTasteKudo: just a taste of the Kudo feature.
    - Imgx: this is specifically for internal use only.
    - jamestest123: blah blah blah.
    - MezanmiTechFireMyTeam: in short, this project can be used for any rating system.
    - MezanmiTechInTouchReminder: just a way to contact people.
    - MoniMisiDemo: for developing ASP.NET web applications of the SPI (Single Page Interface) type.
    - Orchard Redirect404: this project allows you to configure redirects for 404 errors via the Orchard admin interface.
    - P/Opus (.NET Wrapper for libopus): P/Opus is a .NET library written in C# that wraps the libopus C API/library to provide a more .NET-friendly way of encoding and decoding Opus packets.
    - Pengaturan Sambungan (Connection Setting): an application for storing connection strings separately from the main program.
    - PwdManagement: mgrreza.
    - Test: test.
    - Snowflake Id Generator: Snowflake is a network service for generating unique ID numbers at high scale with some simple guarantees.
    - The Game for Microsoft Dynamics CRM2011: a solution containing a framework for Microsoft Dynamics CRM 2011 to enable gamification of the product in order to drive user adoption and business objectives.
    - UCMA 4.0 Async Extension Methods: this collection of extension methods makes it easy for developers to use the async/await pattern for multithreaded development with UCMA 4.0.
    - ValtechGitTfs: a sandbox project to test git-tfs with a TFS server.
    - WarriorG: poker.
    - Web site bán thú cung (a pet shop website): [TRY]
    - XVIB 360: a tool for the Xbox 360.

    Read the article

  • first install for windows eight.....da beta

    - by raysmithequip
    The W8 preview is now installed and I am enjoying it. I remember the learning curve of my first Unix machine back in the eighties; this ain't that. It is normal for me to do the first OS install with a keyboard and a low-end monitor - you never know what you'll encounter out in the field. The OS took to it like a fish to water. I used a low-end Intel DP55W motherboard I gathered on the cheap, an LGA1156 i5 from the used bin, a pair of 6 GB DDR3 sticks, a Rosewill 550-watt power supply, a cheap used twenty-buck sub-200 GB WD SATA drive, a half-working DVD burner and a fanless Asus Nvidia video card - not a great one, but sub-$50 on newey eggey. I did have to hunt the MS forums for a key and of course activate the thing; if DOS had needed this outmoded ritual, we would still be on CP/M and Osborne would be a household name. Of course, little do people know that this ritual was common as far back as the seventies on AT&T Unix installs... not really, but it was possible. I used to joke, back when I ran a BBS, about what hell would have been wrought had DOS 3.2 machines been required to dial into my BBS to send Fido mail to MS and wait for an acknowledgement. All in all, the thing was pushing a seven on the MS Richter scale, not including the vid card; sadly, that came in at just a tad over three. I wanted to evaluate it as a possible replacement for critical machines that in the past went down due to a video card fan failure - you have no idea what a customer thinks when you show them a failed vid card fan: "you mean that little plastic piece of junk caused all this!!??!!!" Yea, man. Some production machines don't need any sort of video; I will at least keep it on the maybe list for those. MTBF is a very important factor; some big-box stores should put percentage-of-failures-within-24-months estimates on the outside of the carton, for sure. And a warning that the power supplies are already at their limit. Let's face it: today, even 550 W can be iffy. A few neat eye-candy improvements over the earlier Windows are nice. The Metro screen is nice; anyone who has used a newer phone recently will intuitively drag their fingers across the screen... lot of good that was with no mouse or touch screen, though. Lucky me, I have been using Windows since day one; I still have a copy of Win 2.0 (and every other version) for no good reason. Still, the old *ix collection of disks is much larger; recompiling any kernel is another silly ritual - same machine, different day, same recompile... argh. RH is my all-time fav; Mandrake was always missing something, like it rewrote the init file or something; Novell is OK as long as you stay on the beaten path; and of course Ubuntu normally recompiles with the same errors, consistently... makes life easy that way. No errors on Windows Eight, just a screen that did not match the installed hardware. Naturally, I Alt-Tabbed right out of it, then hit the flag key to find the Start menu... no Start button. I miss the Start button already. Keyboard-cowboy funnin', and I was browsing the hard drive. Nothing stunning there - I like that; it means I can find stuff. Only I can't find what I want: the Start button... the Start menu is that first screen for touch tablets. No biggie for the everyday user; that is where they will want to be, I can see that. Admins won't want to be there; it is easy enough to get the Control Panel a bazillion other ways, though - just not from a Start button. (See a pattern here?)
    Personally, from the keyboard I find it fun to hit the carets along the location bar at the top of the Explorer screen with tabs and arrows and choose SHOW ALL CONTROL PANEL ITEMS, or thereabouts. Bottom line: I love Seven, and I'll love Eight even more! I am very happy I did not have to follow the normal rule of thumb (a customer watching me build a system and asking questions once said, "oh I get it, so every piece you put in there is basically a hundred bucks, right?"... ok, sure, pretty much, more or less, well, ya, dude). It will be WAY past October till I get a real touch screen, but I did pick up a pair of cheap Tatungs so I can try the NEW main Start screen. I parse a lot of folders and have a vision of how a pair of touch screens will be easier than landing a rover on Mars. OK, fine, they are way smallish, and I don't expect multitouch to work, but we are talking a few percent of the price of a new 21-inch ViewSonic touch screen. Will this OS be a game changer? I don't know. Bottom line: with all the pads and droids in the world, it is more of a catch-up move at first glance - not something MS is used to. An app store? I can see MS's motivation; the others have it. I gather there will not be gadgets there. Go ahead and see what MS did to the once-populated gadget page... go ahead, Google "gadgets" and take a gander. There used to be hundreds of gadgets; they are already gone. They replaced gadgets? Sort of. I'll drop that; it's a bit of a sore point for me. Of more interest was what happened when I downloaded stuff off CodePlex, along with some other normal programs that I like - like Orbitron, top o' my list!! (cardware it is)... Anyways: click on the exe, get a screen - normal for Windows - only this one indicated that I was not running a normal Windows program and had a button to exit the install. Naw. I hit Details, and a hidden run-anyway option came into view... great, my path to the normal Windows has detected a program tha..... yea, ok, ACL is on, fine. Moving along, I got Orbitron installed in record time and was tracking the ISS on the newest Microsoft OS - beta, of course. Felt like the first time I set up BSD all those years ago... FUN!! I suppose I gotta start to think about budgeting for the real OS when it comes out in October; by then I should have a Raspberry Pi and be done with Fedora remixed. Of course that sounds like fun too!! I would use this OS on a tablet or phone. I don't like the idea of being herded to an app store - don't like that on anything; we are Americans and want real choices, not marketed hype, lest you are younger with OPM (other people's money). This OS would be neat on a Zune, but I suspect the Zune is a goner. I am rooting for Microsoft; after all, their default password is not "admin" anymore, nor "alpine" - it's blank. Others force a password; my first fawn password was so long I could not even log into it with the password in front of me. Who the heck uses %$# anyways, and if I was writing a brute-force attack, what the heck kinda impasse is that anyways at .00001 microseconds of a code execution cycle (just a non-qualified number, not a real clock speed)? AI is where it will be before too long, and MS is on that path. Perhaps soon someone will sit down and write an app for the Kinect that watches your eyes while you scan the new main Start screen, clicking on the big E icon when you blink..... boy, is that going to be fun!!!! Sure. Blink, dammit, blink, dammit...... OPM, no doubt. I like Windows Eight. We are moving forwards - better keep a close eye on Ubuntu.
    The real clincher comes when open source becomes paid source... don't blink, I already see plenty of very expensive 'ix apps, some even in app stores already. More to come...

    Read the article

  • Using wget to download pdf files from a site that requires cookies to be set

    - by matt74tm
    I want to access a newspaper site and then download their epaper copies (in PDF). The site requires me to login using my email address and password, and then it permits me to access those PDF URLs. I'm having trouble 'setting my session' in wget. When I login to the site from my browser, it sets two cookie values:

        [email protected]
        Password=12345

    I tried:

        wget --post-data "[email protected]&Password=12345" http://epaper.abc.com/login.aspx

    However, that just downloaded the login page and saved it locally. The FORM on the login page has two fields:

        txtUserID
        txtPassword

    and radio buttons like this:

        <input id="rbtnManchester" type="radio" checked="checked" name="txtpub" value="44">

    Another button:

        <input id="rbtnLondon" type="radio" name="txtpub" value="64">

    If I post this to the login.aspx page, I get the same output:

        wget --post-data "[email protected]&txtPassword=12345&txtpub=44" http://epaper.abc.com/login.aspx

    If I add --save-cookies abc_cookies.txt, the file doesn't seem to have anything other than the default content. For the last command, if I add --debug as well, it says:

        ...
        Set-Cookie: ASP.NET_SessionId=05kphcn4hjmblq45qgnjoe41; path=/; HttpOnly
        ...
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId 05kphcn4hjmblq45qgnjoe41
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx'
        ...
        Saving cookies to abc_cookies.txt.

    However, abc_cookies.txt shows ONLY the following:

        # HTTP cookie file.
        # Generated by Wget on 2011-08-16 08:03:05.
        # Edit at your own risk.

    (Not sure why I'm not getting any responses on SO - perhaps SU is a better forum - http://stackoverflow.com/questions/7064171/using-wget-to-download-pdf-files-from-a-site-that-requires-cookies-to-be-set)

    EDIT 1

        C:\Temp>wget --cookies=on --keep-session-cookies --save-cookies abc_cookies.txt --post-data "txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7" http://epaper.abc.com/login.aspx --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        --2011-08-18 08:15:59-- http://epaper.abc.com/login.aspx
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00a2ae80 (new refcount 1).
        ---request begin---
        POST /login.aspx HTTP/1.0
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 100
        ---request end---
        [POST data: txtUserID=abc%40gmail.com&txtPassword=password&txtpub=44&chkbox=checkbox&submit.x=48&submit.y=7]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:17 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Set-Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145; path=/; HttpOnly
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 107253
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        Length: 107253 (105K) [text/html]
        Saving to: `login.aspx.1'
        100%[==========>] 107,253 24.9K/s in 4.2s
        2011-08-18 08:16:05 (24.9 KB/s) - `login.aspx.1' saved [107253/107253]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

        C:\Temp>wget --referer=http://epaper.abc.com/login.aspx --cookies=on --load-cookies abc_cookies.txt --keep-session-cookies --save-cookies abc_cookies.txt http://epaper.abc.com/PagePrint/16_08_2011_001.pdf --debug
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files (x86)\GnuWin32/etc/wgetrc
        DEBUG output created by Wget 1.11.4 on Windows-MinGW.
        Stored cookie epaper.abc.com -1 (ANY) / <session> <insecure> [expiry none] ASP.NET_SessionId owcrje55yl45kgmhn43gq145
        --2011-08-18 08:16:12-- http://epaper.abc.com/PagePrint/16_08_2011_001.pdf
        Resolving epaper.abc.com... seconds 0.00, 999.999.99.99
        Caching epaper.abc.com => 999.999.99.99
        Connecting to epaper.abc.com|999.999.99.99|:80... seconds 0.00, connected.
        Created socket 300.
        Releasing 0x00598290 (new refcount 1).
        ---request begin---
        GET /PagePrint/16_08_2011_001.pdf HTTP/1.0
        Referer: http://epaper.abc.com/login.aspx
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: epaper.abc.com
        Connection: Keep-Alive
        Cookie: ASP.NET_SessionId=owcrje55yl45kgmhn43gq145
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Connection: keep-alive
        Date: Thu, 18 Aug 2011 02:46:30 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        content-disposition: attachement; filename=Default_logo.gif
        Cache-Control: private
        Content-Type: image/GIF
        Content-Length: 4568
        ---response end---
        200 OK
        Registered socket 300 for persistent reuse.
        Length: 4568 (4.5K) [image/GIF]
        Saving to: `16_08_2011_001.pdf'
        100%[==========>] 4,568 7.74K/s in 0.6s
        2011-08-18 08:16:14 (7.74 KB/s) - `16_08_2011_001.pdf' saved [4568/4568]
        Saving cookies to abc_cookies.txt.
        Done saving cookies.

    Contents of abc_cookies.txt:

        epaper.abc.com FALSE / FALSE 0 ASP.NET_SessionId owcrje55yl45kgmhn43gq145

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

    2007

    Find Table without Clustered Index – Find Table with no Primary Key The clustered index is a very important concept for any table; it impacts performance heavily. Here is a quick script to find tables without a clustered index.

    Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types Question: "Is VARCHAR(MAX) big enough to store the TEXT field?" Answer: "Yes, VARCHAR(MAX) is big enough to accommodate the TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX)."

    Limiting Result Sets by Using TABLESAMPLE – Examples Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and are not in any order. The sampling can be based on a percentage or on a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set.

    User Defined Functions (UDF) Limitations UDFs have their own advantages and uses, but in this article we look at their limitations: things a UDF cannot do, and why stored procedures are considered more flexible than UDFs. This blog post is a good read to learn what the limitations of UDFs are.

    Change Database Compatible Level – Backward Compatibility For a long time SQL Server stayed on compatibility level 80, that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became a major issue. Since that time MS has been releasing new versions every 2-3 years, so changing compatibility is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio.

    Constraint on VARCHAR(MAX) Field To Limit It Certain Length How can I limit a VARCHAR(MAX) field to a maximum length of 12,500 characters? The question was valid, as the application in question allowed only 12,500 characters. First of all, the requirement is a bit strange, but if someone wants to do it, they can, as described in this blog post.

    2008

    UNPIVOT Table Example Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words.

    Create Default Constraint Over Table Column A simple, straight-to-script blog post – I still use it quite often for my own reference.

    UDF – Get the Day of the Week Function It took me four iterations to arrive at this very simple function, which gets the day of the week in a single line.

    2009

    Find Hostname and Current Logged In User Name There are two tricks listed in this blog post with which users can find the hostname and the currently logged-in user name immediately and very easily.
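    In the spirit of the scripts referenced above, here is a minimal sketch of my own (assuming SQL Server 2005 or later; these are illustrations, not the original scripts from the posts):

        -- Tables without a clustered index (heaps): such tables have a
        -- row with index_id = 0 in sys.indexes.
        SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
               t.name                   AS table_name
        FROM sys.tables  AS t
        JOIN sys.indexes AS i ON i.object_id = t.object_id
        WHERE i.index_id = 0;  -- 0 = heap, 1 = clustered index

        -- Hostname and currently logged-in user, one line each.
        SELECT HOST_NAME() AS client_host, SUSER_SNAME() AS login_name;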
    Interesting Observation of Logon Trigger On All Servers While working on a project, I made an interesting observation: a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, naturally I was expecting the entry only once. However, it fired multiple times on different threads – indeed an eccentric phenomenon at first sight!

    Difference Between Candidate Keys and Primary Key One needs to be very careful in selecting the primary key, as an incorrect selection can adversely impact the database architecture and future normalization. For a candidate key to qualify as a primary key, it should be non-NULL and unique in any domain. I have observed quite often that primary keys are seldom changed. I would like to have your feedback on not changing a primary key.

    Create Multiple Filegroup For Single Database Why should one create multiple filegroups for a database, and what are the advantages of doing so? In this blog post, I explain this in detail.

    List All Objects Created on All Filegroups in Database In this blog post we discuss the essential question: "How can I find which object belongs to which filegroup? Is there any way to know this?"

    2010

    DATE and TIME in SQL Server 2008 When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900. This is something to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT.

    Disabled Index and Update Statistics If you do not need a nonclustered index, I suggest you drop it, as keeping indexes disabled is an overhead on your system: every time statistics are updated for the system, the statistics for disabled indexes are also updated.

    Precision of SMALLDATETIME – A 1 Minute Precision The precision of the datatype SMALLDATETIME is one minute. It discards the seconds by rounding up or rounding down any seconds greater than zero.

    2011

    Getting Columns Headers without Result Data – SET FMTONLY ON SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the resultset has only the headers of the results, but no data.

    Copy Database from Instance to Another Instance – Copy Paste in SQL Server SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job.

    Puzzle – SELECT * vs SELECT COUNT(*) If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, then you should read this blog post.

    Creating All New Database with Full Recovery Model This blog post is based on a very interesting story where the user wants every newly created database to behave in a certain way by default. The model database is a secret weapon which should be used very carefully and with proper evaluation. Used carefully, it can be very beneficial when we need newly created databases to behave in a certain fashion.
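    A minimal sketch of the model-database trick just mentioned (my illustration; the recovery model is only one example of a setting new databases inherit):

        -- New databases inherit their settings from the model database,
        -- so forcing FULL recovery there makes every new database start in FULL.
        ALTER DATABASE model SET RECOVERY FULL;

        -- Verify: a database created afterwards reports FULL.
        CREATE DATABASE DemoDb;
        SELECT name, recovery_model_desc
        FROM sys.databases
        WHERE name = 'DemoDb';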
    2012 In 2012 I had two interesting series running on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz. Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training: Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary - Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats, looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation. I had ten vital minutes to make my pitch. I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams and graphs, followed by a live demonstration of my software projected from my laptop. My preparation had been meticulous. It had to be: a year's hard work was at stake, so I'd prepared it to perfection. I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens:
    A. Stare at the smoking laptop vacuously, flapping one's mouth slowly up and down
    B. Stand frozen like a statue, locked in indecision between fright and flight
    C. Run out of the room, weeping
    D. Pretend that this was all planned
    E. Abandon the presentation in favour of a stilted and tedious dissertation about the software
    F. Shake your fist at the sky, and curse the sense of humour of your preferred deity
    I started out for a few seconds on plan B, normally referred to as the 'rabbit in the headlamps of the car' technique. Suddenly, a little voice inside my head spoke. It spoke the famous inane words of Yogi Berra: 'The game isn't over until it's over.' 'Too right,' I thought. What to do? I ran through the alternatives A-F inclusive in my mind, but none appealed to me. I was completely unprepared for this. Nowadays, longevity has taught me more than I wanted to know about the wacky sense of humour of fate, and I would have taken two laptops. I hadn't, but I decided to do the presentation anyway, as planned. I started out ignoring the dead laptop, pretending instead that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren't used to denial on this scale. After my introductory talk, which didn't require any visuals, I came to the diagram that described the application I'd written. I'd taken ages over it and it was hot stuff. Well, it would have been, had it been projected onto the screen. It wasn't. Before I describe what happened then, I must explain that I have thespian tendencies. My triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember, at the time of my finest performance, glancing up over the vast audience of moist-eyed faces during the poignant scene between Eliza and Higgins at the end, and realising that I had a talent that one day could possibly be harnessed for commercial use. I just talked about the diagram as if it were there, throwing in some extra description. The audience nodded helpfully when I'd done enough. Emboldened, I began a sort of mime - well, more of a ballet - to represent each slide as I came to it. Heaven knows I'd done my preparation and, in my mind's eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way any actor playing Macbeth should do the ghost of Banquo. My desperation gave me a manic energy. If you've ever demonstrated a Windows application entirely by mime, gesture and florid description, you'll understand the scale of the challenge, but then I had nothing to lose.
    With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of graphs and diagrams, I used the many tricks of mime, gesture and body language learned from playing Captain Hook, or the Sheriff of Nottingham, in pantomime. I set out determinedly on my desperate venture. There wasn't time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation - ten souls who had to be hypnotized into seeing a Windows application, one that was slick, well organized and functional. I don't remember the details. Eight minutes of my life are gone completely. I was a thespian berserker. I know, however, that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on screen. 'And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!' I waved a mime to signify an animated window opening, and looked up, in my first pause, to gaze defiantly at the audience. It was a sight I'll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures. I hadn't had an audience like that since my starring role in Beauty and the Beast. At that moment, I realized that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up. For the first and last time in my life, the audience of a 'PowerPoint' presentation burst into spontaneous applause. 'Any questions?' 'Yes. Have you got an agent?' Yes, in case you're wondering, I got the deal: they bought the software product from me there and then. However, it was a life-changing experience for me, and I have never again trusted technology as part of a presentation. Even if things can't go wrong, they'll go wrong, and they'll kill the flow of what you're presenting. If you can't do something without the techno-props, then you shouldn't do it. The greatest lesson of all is that great presentations require preparation and 'stage presence' rather than fancy graphics. Graphics are a great supporting aid, but they should never dominate to the point that you're lost without them.

    Read the article

  • CodePlex Daily Summary for Thursday, June 07, 2012

    CodePlex Daily Summary for Thursday, June 07, 2012

    Popular Releases:
    - SDK for .Net 2.0/3.5/4.0 (OAuth2.0 + V2 API): a release built with VS2008 targeting .NET 2.0, 3.5 and 4.0; the release notes mention Entities support, JSON.NET integration, and a dynamic class for the .NET 4.0 API.
    - LINQ to Twitter: LINQ to Twitter Beta v2.0.26: supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. Also available via NuGet! Follow @JoeMayo.
    - Python Tools for Visual Studio: 1.5 Beta 1: we're pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features, including: support for CPython, IronPython, Jython and PyPy; a Python editor with advanced member and signature IntelliSense and refactoring; code navigation ("Find all refs", go to definition, and object browser); local and remote debugging...
    - Circuit Diagram: Circuit Diagram 2.0 Beta 1: new in this release: automatically flip components when placing; delete components using the keyboard delete key; resize document; document properties window; print document; recent files list; confirmation when exiting with unsaved changes; thumbnail previews in Windows Explorer for CDDX files; shortcut keys shown in the toolbox; selected item highlighted in the toolbox; zoom using the mouse scroll wheel while holding down the Ctrl key; plugin support for custom export formats and custom import formats; open...
    - Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbraco: v5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4, it's not yet on a par feature- or performance-wise. What's new? For full details see our http://progress.umbraco.org task-tracking page showing all items completed for 5.2. In a nutshell: Package Builder, Starter Kits, dynamic extension methods, querying/IsHelpers, friendly alt template URLs, localization, various bug fixes and performance enhancements. Gett...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6-minute video. New features in JayData 1.0.5: http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more and http://jaydata.org/blog/release-notes. Sencha Touch 2 module (read-only): this module can be used to bind data retrieved by JayData to a Sencha Touch 2 generated user interface (exam...
    - 32feet.NET: 3.5: this version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5; previously we compiled for .NET v2.0. There are no code changes from our version 3.4 - see the 3.4 release for more information. Changes due to compiling for .NET 3.5: applications should be changed to use NET/NETCF v3.5; removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF - we now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...
    - DotNetNuke® Links: 06.02.01: added the new DNN 6.2.0 beta social feature "friends"; bugfixes.
    - Application Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7.
    - Jolt Environment: Jolt v2 Stable: many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html. Setup instructions are in the download.
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): new features: multilingual support; a max-users property in the Standings web part; games time zone change (UTC+1). Bug fixes: the version 1.4 locking problem (http://euro2012.codeplex.com/discussions/358262); "Field Title not found" (v1.3, German SP) (http://euro2012.codeplex.com/discussions/358189#post844228); "Access is denied" for users with contribute rights; installing on non-English versions of SharePoint; title rules. Installing SharePoint Euro 2012 Predictor: SharePoint E...
    - xNet: xNet 2.1.1: release xNet 2.1.1.
    - Command Line Parser Library: 1.9.2.4 stable: this is the first stable release of the 1.9.* branch. Added tests for HelpText::AutoBuild; fixed a minor formatting error in HelpText::DefaultParsingErrorsHandler.
    - myManga: myManga v1.0.0.4: changelog - updating from the previous version: extract the contents of "Release - myManga v1.0.0.4.zip" to the previous version's folder. Replaces: myManga.exe, BakaBox.dll, CoreMangaClasses.dll, Manga.dll, Plugins/MangaReader.manga.dll, Plugins/MangaFox.manga.dll, Plugins/MangaHere.manga.dll, Plugins/MangaPanda.manga.dll.
    - MVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: this package contains all the latest DLLs for MVVM Light V4 RC, including the DLLs for Windows 8 Release Preview. An updated NuGet package is also available at http://nuget.org/packages/MvvmLightLibsPreview.
    - ExtAspNet: ExtAspNet v3.1.7: 2012-06-03 v3.1.7: fixed a RadioButtonList AJAX rendering bug (swtseaman); added HtmlEncode and HtmlEncodeFormatString to the Grid's BoundField, HyperLinkField, LinkButtonField and WindowField (TiDi), both defaulting to true to escape HTML in the output, matching the behavior of ASP.NET's GridView BoundField (http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode); added UrlEncode to HyperLinkField and WindowField for encoding URLs (defaults to true)...
    - LiveChat Starter Kit: LCSK v1.5.2: new features: visitor location (city - country) from geo-location; pass configuration via JavaScript for the chat box; new visitor identification (no more using the IP address as visitor identification). To update from 1.5.1, run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation: the easiest way to add LCSK to your app is by...
    - Kendo UI ASP.NET Sample Applications: Sample Applications (2012-06-01): sample applications demonstrating the use of Kendo UI in ASP.NET applications.
    - Better Explorer: Better Explorer Beta 1: finally, the first beta is here! There were a lot of changes, including: translations into 10 different languages (the translations are not complete and will be updated soon); conditional select; new tools for managing archives; a Folder Tools tab; a new search bar and Search tab; new image editing tools; an update function; many bug fixes, stability fixes, and memory leak fixes; and other new features as well! Please check it out, and if there are any problems, let us know. :) Also, do not for...

    New Projects:
    - ActiveAttributes: create attributes that execute code when their target members are called.
    - Blogger Access Library: a .NET library that makes it easy to post your blog article to Blogger when you want to control Blogger with .NET (C#, VB, F#, etc.) code.
    - cookie.js - Simple JavaScript Cookie Processor: a simple JavaScript cookie processor.
    - DeberPrueba: a homework assignment.
    - DNN User Redirect: allows you to redirect users who are members of specific roles or groups to landing pages within your DotNetNuke website. This is perfect for scenarios such as the following: redirect unauthenticated users to a login page; redirect authenticated users to a welcome page after successful login; redirect users to a page where they need to view/accept Terms of Service; redirect employees who are part of a specific department in your organization to a departmental landing page. When ...
    - Evento-Pro: the application will have an area for maintaining and registering the information needed for it to work, as well as a screen for viewing and creating/maintaining events.
    - Exchange Mailbox Permission Reverse Lookup: ever wondered on which mailboxes a user has full-access or send-as rights? One common strategy is to use groups to grant permissions on shared mailboxes, where querying which groups the user is a member of would do the job. But if you grant mailbox permissions directly to users (maybe because you are using the Exchange 2010 shared-mailbox automapping feature), this tool can come in quite handy.
    - Find and Replace word in the sentences: this program uses Java Development Kit 6.0 and the Highlighter class. It is complete, with source code, so everybody can use it for anything. I use it for my assignment at NCC Education on the IAD (International Advanced Diploma).
    - GIMS: Graphical ImageMagick Scripter.
    - GoogleMaps .NET API: a wrapper for using the Google Maps API in a .NET Windows application (WinForms and WPF). It works by using a browser control (either WinForms or WPF) and interacting with a JavaScript implementation of Google Maps.
    - Grid.Mvc: Grid.Mvc is a component that allows easy construction of HTML tables for displaying, paging and sorting data from a collection of Model objects.
    - HgReleaseNotesGen: a command-line utility for automatically creating a Release Notes document from a Mercurial repository - currently used by StyleCop.
    - HS FB: Fizz Buzz.
    - JaySvcUtil - generate JavaScript context from OData metadata: this tool generates client-side metadata for OData service endpoints, so OData services can be consumed from JavaScript using JayData. Visit http://jaydata.org for detailed documentation, example apps and tutorials. You can download JayData from the JayData CodePlex project (http://jaydata.codeplex.com).
    - Knockout Serializer: Knockout Serializer.
    - MarsExplorer: this is a personal project.
    - Multi camera snapshot taker: this C# WinForms project is based on the DXSnap-2008 sample project. A few days ago, someone told me that it was not possible to take three snapshots from three different webcams at the same time (this person wanted to take snapshots of an item on three different axes (X, Y, Z) at the same time), so I've tried to make it work.
    - mvcEticaret: a commerce site.
    - Push Notification: a push notification service sample using F# and duplex NetTcpBinding.
    - Reproductor: a media player.
    - StudyMate: an Internet- and mobile-based website where users can make and share exam revision notes to read from mobile, attempt objective-type questions, and chat with online users.
    - SugarSync Folder Provider for DotNetNuke: allows direct integration between your cloud-hosted files and the file system in your DotNetNuke website. Using this extension, you will be able to enjoy the management of files in a CMS with the power of cloud file hosting.
    - The SharePoint 2010 Tag Cloud web part for blog web template: the SharePoint 2010 tag cloud web part for the blog web template.
    - TimeClearWinFreeTime: a C# tool for Windows 7; on Windows XP it requires .NET Framework 3.5 (http://msdn.microsoft.com/zh-cn/netframework/cc378097).
    - VoIP based Call Management System (VBCMS): this project intends to focus on new ways of providing roadside assistance service, and much more, using a VoIP-based system.
    - Xaml FlowDocument to PDF Converter: a simple FlowDocument-to-PDF converter with paging, header and footer.

    Read the article

  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers: YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable.) Is it merely a pleasing-the-eye situation? So why have the display be a multiple of 360? The Answer SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process: Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field where psychooptics meets marketing. First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2×ISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for it. Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution. So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 MP). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 MP, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
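    Working out the pixel counts behind those ~2x steps makes the pattern plain (my arithmetic; each ratio is relative to the previous step):

        720 × 576  =   414,720 px
        1280 × 720  =   921,600 px  (≈ 2.2×)
        1920 × 1080 = 2,073,600 px  (2.25×)
        2560 × 1440 = 3,686,400 px  (≈ 1.8×, the "about 3.7 MP" mentioned above)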
Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays, back in the day they used to be multiples of 128. This is something that just grew out of LCD panel industry. LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you would tear down your 1920×1080 screen, I would be putting money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’. Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance for reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s. However, again this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized and as the computer as media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen. But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9 and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here. 
    There’s more, of course: the bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role as well. But for today’s considerations, this is about all you need to know.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation. by Trevor Naidoo, December 2010   Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results. 1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization. 2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact. 3. 
Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross-reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches but also entered with the inch mark (″). These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example: customers end up receiving duplicate communications, which not only impacts customer satisfaction but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but also to share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

Characteristics of Stellar MDM

When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

- enterprise-grade MDM performance
- complete technology that can be rapidly deployed and addresses multiple business issues
- end-to-end MDM process management with data quality monitoring and assurance
- pre-built, business-relevant MDM applications with data stores and workflows

These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.
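    To make the cleansing and matching problem in point 4 concrete, here is a minimal C# sketch of the normalize-then-compare idea. The abbreviation map and the matching rule are illustrative assumptions, not a description of Oracle's (or anyone's) actual implementation:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class AddressNormalizer
        {
            // Illustrative abbreviation map; a real MDM hub uses far richer rules.
            static readonly Dictionary<string, string> Abbreviations =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    ["av"] = "avenue",
                    ["ave"] = "avenue",
                    ["st"] = "street",
                    ["rd"] = "road",
                };

            static string Normalize(string address)
            {
                IEnumerable<string> words = address
                    .ToLowerInvariant()
                    .Split(new[] { ' ', '.', ',' }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(w => Abbreviations.TryGetValue(w, out var full) ? full : w);
                return string.Join(" ", words);
            }

            static void Main()
            {
                // Two records that look different but match once standardized.
                string a = "123 Main Av.";
                string b = "123 MAIN Avenue";

                Console.WriteLine(Normalize(a) == Normalize(b)
                    ? "Duplicate candidates"
                    : "Distinct records");
            }
        }

    A real hub layers fuzzy matching, survivorship rules and stewardship workflows on top of this, but automating the standardization step is what removes the manual, heroic effort the survey describes.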

    Read the article

  • Writing custom Message Formatter for SOAP basicHttpBinding

    - by Lijo
    I have a WSDL published by our service development team. It is using SOAP and basicHttpBinding. I can add the service reference to the project using Add Service Reference option in Visual Studio. I need to develop the WCF client. I need to use custom Message Formatter (for mapping between Messages and CLR types). Can you please show how to write the custom Message Formatter (in C# )for the following wsdl? Note: I am planning to use custom Message Formatter due to an issue mentioned in http://stackoverflow.com/questions/12316884/header-namespace-mismatch-issue WSDL <definitions xmlns:import0="urn:thinktecture-com:demos:restaurantservice:data:v1" xmlns:import2="urn:thinktecture-com:demos:restaurantservice:messages:v1" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:import1="urn:thinktecture-com:demos:restaurantservice:headerdata:v1" xmlns:tns="urn:thinktecture-com:demos:restaurantservice:wsdl:v1" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" name="RestauarntService" targetNamespace="urn:thinktecture-com:demos:restaurantservice:wsdl:v1" xmlns="http://schemas.xmlsoap.org/wsdl/"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <types> <xsd:schema> <xsd:import schemaLocation="RestaurantData.xsd" namespace="urn:thinktecture-com:demos:restaurantservice:data:v1" /> <xsd:import schemaLocation="RestaurantHeaderData.xsd" namespace="urn:thinktecture-com:demos:restaurantservice:headerdata:v1" /> <xsd:import schemaLocation="RestaurantMessages.xsd" namespace="urn:thinktecture-com:demos:restaurantservice:messages:v1" /> </xsd:schema> </types> <message name="getRestaurantsIn"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import2:getRestaurants" /> </message> <message name="getRestaurantsOut"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import2:getRestaurantsResponse" /> </message> <message name="lijosCustomFaultMessage"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="fault" element="import2:customFault" /> </message> <message name="userCredentialsIn"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import1:userCredentials" /> </message> <message name="addRestaurantIn"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import2:addRestaurant" /> </message> <message name="addRestaurantInHeader1"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import1:userCredentials" /> </message> <message name="customFaultIn"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <part name="parameters" element="import2:customFault" /> </message> <portType name="RestauarntServiceInterface"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <operation name="getRestaurants"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <input message="tns:getRestaurantsIn" /> <output message="tns:getRestaurantsOut" /> <fault name="lijosCustomFaultMessage" message="tns:lijosCustomFaultMessage" /> </operation> <operation name="userCredentials"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <input message="tns:userCredentialsIn" /> </operation> <operation name="addRestaurant"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <input 
message="tns:addRestaurantIn" /> </operation> <operation name="customFault"> <wsdl:documentation xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" /> <input message="tns:customFaultIn" /> </operation> </portType> <binding name="BasicHttpBinding_RestauarntServiceInterface" type="tns:RestauarntServiceInterface"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> <operation name="getRestaurants"> <soap:operation soapAction="urn:thinktecture-com:demos:restaurantservice:wsdl:v1:getRestaurantsIn" style="document" /> <input> <soap:body use="literal" /> </input> <output> <soap:body use="literal" /> </output> <fault name="lijosCustomFaultMessage"> <soap:fault use="literal" name="lijosCustomFaultMessage" namespace="" /> </fault> </operation> <operation name="userCredentials"> <soap:operation soapAction="urn:thinktecture-com:demos:restaurantservice:wsdl:v1:userCredentialsIn" style="document" /> <input> <soap:body use="literal" /> </input> </operation> <operation name="addRestaurant"> <soap:operation soapAction="urn:thinktecture-com:demos:restaurantservice:wsdl:v1:addRestaurantIn" style="document" /> <input> <soap:body use="literal" /> <soap:header message="tns:addRestaurantInHeader1" part="parameters" use="literal" /> </input> </operation> <operation name="customFault"> <soap:operation soapAction="urn:thinktecture-com:demos:restaurantservice:wsdl:v1:customFaultIn" style="document" /> <input> <soap:body use="literal" /> </input> </operation> </binding> <service name="RestauarntServicePort"> <port name="RestauarntServicePort" binding="tns:BasicHttpBinding_RestauarntServiceInterface"> <soap:address location="http://localhost/RestauarntService" /> </port> </service> </definitions>?? RestaurantData.xsd <?xml version="1.0" encoding="utf-8" ?> <xs:schema id="RestaurantData" targetNamespace="urn:thinktecture-com:demos:restaurantservice:data:v1" elementFormDefault="qualified" xmlns="urn:thinktecture-com:demos:restaurantservice:data:v1" xmlns:mstns="urn:thinktecture-com:demos:restaurantservice:data:v1" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:complexType name="restaurantInfo"> <xs:sequence> <xs:element name="restaurantID" type="xs:int" /> <xs:element name="name" type="xs:string" /> <xs:element name="address" type="xs:string" /> <xs:element name="city" type="xs:string" /> <xs:element name="state" type="xs:string" /> <xs:element name="zip" type="xs:string" /> <xs:element name="openFrom" type="xs:time" /> <xs:element name="openTo" type="xs:time" /> </xs:sequence> </xs:complexType> <xs:complexType name="restaurantsList"> <xs:sequence> <xs:element name="restaurant" type="restaurantInfo" maxOccurs="unbounded" minOccurs="0" /> </xs:sequence> </xs:complexType> <xs:complexType name="customFault"> <xs:sequence> <xs:element name="errorCode" type="xs:string"/> <xs:element name="message" type="xs:string"/> <xs:element maxOccurs="unbounded" minOccurs="0" name="messages" type="xs:string" /> </xs:sequence> </xs:complexType> </xs:schema> RestaurantHeaderData.xsd <?xml version="1.0" encoding="utf-8" ?> <xs:schema id="RestaurantHeaderData" targetNamespace="urn:thinktecture-com:demos:restaurantservice:headerdata:v1" elementFormDefault="qualified" xmlns="urn:thinktecture-com:demos:restaurantservice:headerdata:v1" xmlns:mstns="urn:thinktecture-com:demos:restaurantservice:headerdata:v1" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:complexType name="credentials"> <xs:sequence> <xs:element name="username" type="xs:string" /> <xs:element name="password" type="xs:string" /> </xs:sequence> </xs:complexType> 
<xs:element name="userCredentials" type="credentials"> </xs:element> </xs:schema> ? RestaurantMessages.xsd <?xml version="1.0" encoding="utf-8" ?> <xs:schema id="RestaurantMessages" targetNamespace="urn:thinktecture-com:demos:restaurantservice:messages:v1" elementFormDefault="qualified" xmlns="urn:thinktecture-com:demos:restaurantservice:messages:v1" xmlns:mstns="urn:thinktecture-com:demos:restaurantservice:messages:v1" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:import="urn:thinktecture-com:demos:restaurantservice:data:v1"> <xs:import id="RestaurantData" schemaLocation="RestaurantData.xsd" namespace="urn:thinktecture-com:demos:restaurantservice:data:v1"> </xs:import> <xs:element name="getRestaurants"> <xs:complexType> <xs:sequence> <xs:element name="zip" type="xs:string" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="getRestaurantsResponse"> <xs:complexType> <xs:sequence> <xs:element name="restaurants" type="import:restaurantsList" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="addRestaurant"> <xs:complexType> <xs:sequence> <xs:element name="restaurant" type="import:restaurantInfo" /> </xs:sequence> </xs:complexType> </xs:element> <xs:element name="customFault" type="import:customFault" /> </xs:schema>

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go."

Is it too much to ask of a quite expensive 3rd-party backup tool that it be way faster than the SQL Server native backup? Or at least that it save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size.

After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of three tools, which are the main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered: is it possible that many simply grew accustomed to using these tools back in SQL 2000 and 2005? That is easy to understand, given that a 300 GB database backup, for instance, using a regular SQL 2005 backup statement, would have run for about 3 hours and produced a ~150 GB file (depending on the content, of course). Then you take a 3rd-party tool that performs the same backup in 30 minutes, resulting in a 30 GB file, and it leaves you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings, you would also get backup file encryption and virtual restore - features that are still missing from SQL Server.

But if you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

Medium-size database. Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300 GB database. It appears that when talking about backup speed, SQL 2008 R2 compresses and performs the backup in overall times similar to all three other tools, while the 3rd-party tools' maximum compression level takes twice as long. The backup file gain is not that impressive either, except at the highest compression levels, where the price you pay is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast with its maximum compression level, but it used 95% CPU on the server for most of the run time. Note that I used two types of destination storage, SATA (11 disks) and FC (53 disks); obviously, on the faster storage the backup was ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium-sized databases? I'm definitely not going to.

Very large database. As the next phase of this benchmark, I moved to a 6-terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped.
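    For reference, the same striped, compressed native backup can also be scripted from C# through SMO rather than T-SQL. A sketch under the assumption that the Microsoft.SqlServer.Smo assemblies are referenced; the server, database and share names are placeholders:

        using Microsoft.SqlServer.Management.Smo;

        class StripedBackup
        {
            static void Main()
            {
                var server = new Server("MYSQLHOST"); // placeholder instance name

                var backup = new Backup
                {
                    Database = "MyBigDb",
                    Action = BackupActionType.Database,
                    CompressionOption = BackupCompressionOptions.On,
                    Initialize = true, // overwrite existing sets (WITH INIT)
                };

                // One file per CPU core lets the backup use parallel I/O.
                for (int i = 1; i <= 8; i++)
                {
                    backup.Devices.AddDevice($@"\\backupshare\MyBigDb\par{i}.bak",
                                             DeviceType.File);
                }

                backup.SqlBackup(server);
            }
        }

    The T-SQL equivalents used in the actual benchmark are listed below, and as the post notes, in production these would be wrapped in a SQL Server Agent job either way.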
SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU core; in the case above, 8 files for a server with two quad-core CPUs. The impact of additional files beyond that is minimal. However, SQLsafe doesn't show any speed improvement between 4 files and 8 files.

Of course, with such huge databases, every half percent of compression translates into noticeable numbers. Saving almost 470 GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and high CPU load are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd-party backup tool developers: we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 disks.

Backup statements: No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used extended stored procedures (command-line execution is also an option; I haven't noticed any impact on backup performance).

Native SQL backup:
backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

LiteSpeed:
EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1

SQL Backup (Red Gate):
EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

SQL Safe (Idera):
EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

If you still insist on using 3rd-party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more:

- Red Gate: use the THREADPRIORITY option (values 0 – 6)
- LiteSpeed: use @throttle (a percentage, like 70%)
- SQL Safe: the only thing I have found was the @Threads option.

Yours, Maria

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it, but sometimes maximizing it takes even longer than launching it fresh. What gives?

Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader Bart wants to know why he’s not saving any time with application minimization:

I’m working in Photoshop CS6 and multiple browsers a lot. I’m not using them all at once, so sometimes some applications are minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar – it sometimes takes longer than starting them! Especially Photoshop feels really weird for many seconds after finally showing up: it’s slow, unresponsive and sometimes even totally freezes for a minute or two. It’s not a hardware problem, as it’s always been like that on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could guys with powerful PCs/Macs tell me – does it also happen to you? I guess OSes somehow “focus” on active software and move all the resources away from the ones that run but are not used. Is it possible to somehow set RAM/CPU/HDD priorities or something for, let’s say, Photoshop, so it won’t slow down after a long period of inactivity?

So what is the deal? Why does he find himself waiting to maximize a minimized app?

The Answer

SuperUser contributor Allquixotic explains why:

Summary

The immediate problem is that the programs you have minimized are being paged out to the “page file” on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Out of these options, adding more RAM is generally the most effective solution.

Explanation

The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there’s significant memory pressure (meaning the system doesn’t have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk, and then makes that area of RAM free. That free RAM helps programs you’re actively using — say, your web browser — run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This “free” RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get that data. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve the responsiveness of the program(s) you are actively using, by making RAM available to them, and caching the files they access in RAM instead of the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM. The time increases the larger the program’s footprint in memory. This is why you experience that delay when maximizing Photoshop.
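    If you want to see this paging happen while you maximize a minimized app, Windows exposes the relevant performance counters. A minimal C# sketch (Windows-only; on modern .NET it needs the System.Diagnostics.PerformanceCounter package); minimize a large app, maximize it, and watch Pages/sec spike:

        using System;
        using System.Diagnostics;
        using System.Threading;

        class PagingMonitor
        {
            static void Main()
            {
                // Hard page faults per second (traffic to and from the page file)
                // and free physical memory -- the two numbers behind the symptom.
                using (var pagesPerSec = new PerformanceCounter("Memory", "Pages/sec"))
                using (var availableMb = new PerformanceCounter("Memory", "Available MBytes"))
                {
                    for (int i = 0; i < 10; i++)
                    {
                        Console.WriteLine(
                            $"Pages/sec: {pagesPerSec.NextValue(),8:F0}   " +
                            $"Available MB: {availableMb.NextValue(),8:F0}");
                        Thread.Sleep(1000);
                    }
                }
            }
        }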
RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

Remedies

Here is an explanation of the available remedies, and their general effectiveness:

Installing more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc., depending on how old it is. If it’s a laptop, chances are you’ll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make available more RAM for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is that you have at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email. That means photo editing, video editing/viewing, playing computer games, audio editing or recording, programming / development, etc. all should have at least 8 GB of RAM, if not more.

Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. This also limits your multitasking ability. It’s a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. This also wouldn’t stop Photoshop from being swapped when minimizing it, so it really isn’t a very effective solution. It only helps in some specific situations.

Install an SSD: If your page file is on an SSD, the SSD’s improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant random stream of writes; they can only be written over a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.

Use a newer system architecture: Depending on the age of your system, you may be using an out-of-date system architecture. The “system architecture” is generally defined as the “generation” (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the “Nehalem” generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. This was replaced with a “point to point” architecture, meaning that each component gets its own dedicated “lane” to the CPU, which continues to be improved every few years with new generations.
You will generally see a more significant improvement in overall system performance depending on the “gap” between your computer’s architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to “Haswell” (the latest as of Q4 2013) than a “Sandy Bridge” architecture from ~2010.

Links

Related questions:
How to reduce disk thrashing (paging)?
Windows Swap (Page File): Enable or Disable?

Also, just in case you’re considering it, you really shouldn’t disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows Page File alone, see here and here.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Need personal advice on how to get out of a company

    - by SOfan
    Hi, I have been an SO user for the past 6 months and this is the first time I am turning to SO for personal help. I have asked technical questions before with my real ID. I have been stuck inside a service-based IT company for the past year and haven't been able to decide whether to leave it, when to leave it and how to leave it. I took 2 weeks of LWP (leave without pay) for medical reasons at roughly the end of my first year, and then soon after reporting back, I applied for 2 more months of LWP (on medical/personal grounds) with the intention of working on my health, taking up a hobby class to ward off depression and pessimism, having some fun in life, and looking for a job I would really be excited about - one that interests me and matches my strengths. My leave starts from this Monday. So in any case, I had set my mind on leaving the company after I rejoin, hopefully with a job offer already in hand (after figuring out what I want to do). Nor can I stand the past project, past colleagues, the company, HR, or the pathetically low salary. But if I really listen to my heart, I don't want to have to go back to that office after my sabbatical and see those people again. I will have to resign after my sabbatical ends. The HR people perhaps won't like it, and may even accuse me, to my face or behind my back, of lying about medical and personal reasons when the primary purpose of my leave was really to hunt for a better job. They could also get nasty and force me to serve a 2-month notice period. There is no way I see myself resuming the old project or starting new work after the sabbatical. It will be a pain. Since they have already approved 2 months of leave, ideally they should be able to relieve me the day after I rejoin, if they want to. But I don't know whether, if they want to get nasty, they will mention my 2-month sabbatical in my experience letter or, more scary, the words medical/personal reasons. I have hard-earned my experience here, having worked against my will; mostly it has been painful and I have slogged like anything, because I realize the importance of work experience in the IT industry. I am not greedy to have those 2 extra months included in my experience letter, but I don't want my experience letter messed up in a way that makes my next employer ask questions, get suspicious, or be wary that I have some ongoing medical issue. I am an emotional, moody person, somebody who can't stay in an environment once my mind and heart start hating it. So I think it is perhaps best if I resign on Monday itself, telling them (in a polite manner) something like: look, I took a sabbatical for my own reasons, but I don't want to resume working in the company after it ends, so please accept my resignation; now tell me what you want to do about my leave request and my notice period, and when you are willing to relieve me. What should I write, and how? Some background: I am working in an IT company in India. I am overqualified for the company, and it is grossly underpaying me. My educational qualifications far exceed anyone's in the whole company, as I am both a CS undergrad and a CS grad. I joined this company after finishing grad school. I had self-doubts about my skills and interest as a programmer. I like doing research-oriented work, though I didn't have any particular success during grad school. My life here has been very hectic. The project, containing many, many sub-projects, has kept me on my toes and I have never really liked the work. I have been playing against my strengths.
There is also the company's strict internet usage policy (you can't read Gmail, can't browse any non-work-related sites, not even news). When working for a client, we can't even check company-related emails from our machines; for this, one has to go to a small room with about 5 kiosk machines. Most of the time those machines are not available, so it was not unusual to keep making rounds of these kiosks to check and browse company emails. So it was not easy for a not particularly careful person to keep in touch with basic company affairs. Things like this, which are new to me, make me feel restricted. I am an indecisive person with a sense of failure and self-doubt, never meeting unrealistic expectations. Somewhere at the back of my mind, I envy my classmates who make smooth transitions from company to company without any gap in their resumes. I, on the other hand, have gaps in my resume. I get tired after working in a place for some time, and have problems with colleagues in general. I am not particularly great with people, have few friends, and am not known for a fun nature; rather serious, a scholar. I am not a typical, conventional female. I think females are usually more disciplined, but I am not so. I reach the office late (though after informing my manager). I don't want to blame them entirely, because from my past it is not unusual for me to get indecisive about things. I also had doubts about my ability as a researcher and my chances of succeeding there; about building relationships in a group, having something to talk about, the newspaper. I get cut off from people. Peer pressure. I make blunders in coding and lose patience. Consciously or unconsciously, I feel contempt for the people here, the work here, the environment here. I keep going back and forth on whether I should go to a place that does innovation and research-oriented work, one of the product biggies with great, motivated, competent people passionate about the products they are building; but then I also doubt my ability to survive there. I have identified that an ideal job for me would be 4 days a week, with a high salary. When among people in the company/team, I can't think much. I need some time at home to read good, authentic books, written in good style, on the work I am doing, so that I am comfortable with my understanding of it. I get pressured easily under deadlines and need the 5th day to cool off. I took the 2 weeks of leave because each day was hell for me. Maybe the depression phase of bipolar is on, and partially it could also be that, being a work-centered person who derives happiness and self-esteem from work, I haven't been enjoying the work and have been working for the sole purpose of proving stability and the ability to stick with it against all odds, facing whatever challenges I see, bonding with people, identifying opportunities to learn in a given task, etc. I have been averaging one day of LWP every week or 10 days. Or maybe it is because of my nature: ADD, not being able to switch contexts, out of touch with the news, no circle of friends whose company I enjoy, less general knowledge to talk about, just some technical stuff. Anyway, due to emotional reasons, some practical reasons, etc., I wanted to be very sure before leaving. So my leave starts from Monday and I should feel happy about it. I have taken the leave for a few purposes: to take care of my health through regular yoga/exercise (with a project on, I just can't do anything regularly), to reassess myself and see what I want to try next and which work I might like, to look for the next job, and to take up a hobby that I like, say singing.
I am not clear on my career or job aspirations. I have tried my hand at research. During this year's appraisal yesterday, I even had some conflict with my last manager. When meeting with me one on one, he would say all nice things about me, but in his feedback to my new manager he hasn't given any 'excellent' ratings; it is all only 'good'. I am angry at this old manager. The new manager also scolded me because I didn't agree with his appraisal and waited to hear it for myself from the old manager; he kind of scolded me for wasting his time. Am I being unethical somewhere? I am always very conscious of whether I am cheating anywhere. What advice am I seeking? How to resign, and what to write in the resignation letter.

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management.

Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step?

Augmenting Social Data with other Public Data for More Advanced Analytics

When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI) to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced:

- It can vary across structured, semi-structured, and unstructured data
- It can span across content, profile, and communities-of-profiles data
- It is increasingly public, curated and user generated

The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS.

Some quick points on these requirements. Public Data, which in this context is about Common Business Entities, such as:

- Customers, Suppliers, Partners, Competitors (all are organizations)
- Contacts, Consumers, Employees (all are people)
- Products, Brands

This data can be broadly categorized, incrementally, as:

- Base Utility data (address, industry classification)
- Public Master Reference data (trade style, hierarchy)
- Social/Web data (News, Feeds, Graph)
- Transactional Data generated by enterprise processes, workflows etc.

This data has traits of high volume, variety, velocity etc., and the technology needed to efficiently integrate it for your needs includes:

- Change management of Public Reference Data across all categories
- Applied Big Data to extract statistics as well as real-time insights
- Knowledge Diagnostics and Data Mining

As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications.
In addition, they are requesting a service that is:

- Agile and easy to use: applications integrated with the service can obtain data on-demand, quickly and simply
- Cost-effective: pre-integrated into applications so customers don’t have to do it themselves
- High in data quality: a single point of access to reference data for data quality and linkages to transactional, curated and social data
- Supportive of data governance: more manageable and cost-effective, since control of data privacy and compliance can be enforced in a centralized place

Data-as-a-Service (DaaS)

Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from its infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach, where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the commoditization of certain data types will occur.

How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location-based social use cases (marketing, sales etc.).

Our fundamental belief is that value-added data, achieved through enrichment with specialized algorithms as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights.
The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social metadata) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first time you have heard of it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can. Photo: Erik Araujo, stock.xchng
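    As a rough illustration of the phase-1 relationship just described (one company mastered against several brands’ social insights), the reference record might be modeled along these lines in C#. The shape below is an assumption for illustration only, not the schema of the Oracle Cloud Data and Insight Service:

        using System;
        using System.Collections.Generic;

        // Hypothetical shape of a phase-1 master reference record: one
        // organization linked to the brands whose social signals are tracked.
        class BrandReference
        {
            public string BrandName { get; set; }
            public List<string> SocialProfiles { get; } = new List<string>();
        }

        class MasterReferenceRecord
        {
            public string OrganizationName { get; set; }   // e.g. "General Motors"
            public string MasterReferenceKey { get; set; } // placeholder utility key
            public List<BrandReference> Brands { get; } = new List<BrandReference>();
        }

        class Demo
        {
            static void Main()
            {
                var gm = new MasterReferenceRecord
                {
                    OrganizationName = "General Motors",
                    MasterReferenceKey = "org-0001", // hypothetical identifier
                    Brands =
                    {
                        new BrandReference { BrandName = "Chevrolet",
                                             SocialProfiles = { "twitter:@chevrolet" } },
                        new BrandReference { BrandName = "Cadillac" },
                    }
                };

                Console.WriteLine($"{gm.OrganizationName}: {gm.Brands.Count} linked brands");
            }
        }

    Phase 2 then simply adds more relationship types (competitors, contacts, other brands) to the same record.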

    Read the article

  • TGIF: Engagement Wrap-up

    - by Michael Snow
    We've had a very busy week here at Oracle, and as we build up to Oracle OpenWorld starting in less than 10 days, it doesn't look like things will be slowing down. Engagement is definitely in the air this week. Our friend John Mancini published a great article entitled "The World of Engagement" on his Digital Landfill blog yesterday, and we hosted a great webcast with R "Ray" Wang from Constellation Research yesterday on the "9 C's of Engagement".

I wanted to wrap up the week with some key takeaways from our webcast with Ray Wang. If you missed the webcast, fear not - it is now available On-Demand. We'll leave you this week with lots of questions about how to navigate these churning waters of engagement. Stay tuned to the Oracle WebCenter Social Business Thought Leaders Webcast Series as we fuel this dialogue.

Company Culture
- Does company support a culture of putting customer satisfaction ahead of profits?
- Does culture promote creativity and cross-functional employee collaboration?
- Does culture accept different views of a multi-generational workforce?
- Does culture promote employee training and skills development?
- Does culture support upward mobility and long-term retention?
- Does culture support work-life balance?
- Does the culture provide rewards for employees for outstanding customer support?

Channels
- What are the current primary channels for customer communications?
- What do you think will be the primary channels in two years?
- Is company developing a support model for emerging channels?
- Do all channels consistently deliver the same level of customer support?
- Do you know the cost per transaction across all channels?
- Do you engage customers proactively across multiple channels?
- Do all channels have access to the same customer information?

Community
- Does company extend customer support into virtual communities of interest?
- Does company facilitate educating users through its virtual communities?
- Does company mine its customers' experience into useful data?
- Does company increase the value for customers through using data to deliver new products and services?
- Does company support two-way interactions with its customers through communities of interest?
- Does company actively support social CRM, online communities and social media markets?

Credibility
- Does company market its trustworthiness through external certificates such as business licenses, BBB certificates or other validations?
- Does company promote trust through customer testimonials and case studies on ethical business practices?
- Does company promote truthful marketing campaigns?
- Does company make it easy for customers to complain?
- Does company build its reputation for standing behind its products with guarantees for satisfaction?
- Does company protect its customer data with high security measures?

Content
- What sources do you use to create customer content?
- Does company mine social media and blogs for customer content?
- How does your company sort, store and retain its customer content?
- How frequently does content get updated?
- What external sources do you use for customer content?
- How many responses are typically received from a knowledge management system inquiry?
- Does your company use customer content to design and develop new products and services?

Context
- Does your company market to customers in clusters or individually?
- Does your company customize its messages and personalize them to the specific needs of each individual customer?
- Does your company store customer data based on their past behaviors, purchases, sentiment analysis and current activities?
- Does your company manage customer context according to the channels used? For example, does it identify personal-use channels versus business channels?
- What is your frequency of collecting customer activities across various touch points?
- How is your customer data stored and analyzed?
- Is contextual data used for future customer outreach?

Cadence
- Which channels does your company measure: web site visits, phone calls, IVR, store visits, face to face, social media?
- Does company make effective use of cross-channel marketing to promote more frequent customer engagement?
- Does your company rate the patterns relevant for your product or service and monitor usage against this pattern?
- Does your company measure the frequency of both online and offline channels?
- Does your company apply metrics to the frequency of customer engagements with product or services revenues?
- Does your company consolidate data for customer engagement across various channels for a complete view of its customer?

Catalyst
- Does company offer coupon discounts?
- Does company have a customer loyalty program or a VIP membership program?
- Does company mine customer data to target specific groups of buyers?
- Do internal employees serve as ambassadors for customer programs?
- Does company drive loyalty through social media loyalty programs?
- Does company build rewards based on using loyalty data?
- Does company offer an employee incentive program to drive customer loyalty?

    Read the article

  • CodePlex Daily Summary for Monday, May 26, 2014

    CodePlex Daily Summary for Monday, May 26, 2014

Popular Releases

ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.

Role Based Views in Microsoft Dynamics CRM 2011: Role Based Views in CRM 2011 and 2013 - 1.1.0.0: Issues fixed in this build: 1. Works for CRM 2013 2. Lookup view not getting blocked

SimCityPak: SimCityPak 0.3.1.0: Main New Features: Fixed Importing of Instance Names (get rid of the Dutch translations) Added advanced editor for Decal Dictionaries Added possibility to import .PNG to generate new decals Added advanced editor for Path display entries

Simple Connect To Db: SimpleConnectToDb_v1: SimpleConnectToDb_v1

CRM 2011 / CRM 2013 Form Helper: v2014.05.25: v2014.05.25 Added PhoneFormat & PhoneFormatAreaCode v2014.05.24 Initial Release

Create Word documents without MS Word: Release 3.0: Add support for Sections, Section Headers and Footers and right-to-left languages.

Corporate News App for SharePoint 2013: CorporateNewsApp v1.6.2.0: Important note: This version contains a major bug fix for the generic error "Request failed. Unexpected response data from server null". This error occurs on SharePoint Online only, following an update of the Javascript API after May 2014. If you have installed this application manually in your applications company catalog, you can download the CorporateNewsApp.app file in the zip archive and update it manually. If you have installed this application directly from the SharePoint Store, it ...

DevOS: DevOS: Plugin system added. Including: DevOS.exe DevOS API.dll Files must be in the same folder

Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0 Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last"

C64 Studio: 3.5: Add: BASIC renumber function Add: !PET pseudo op Add: elseif for !if, } else { pseudo op Add: !TRACE pseudo op Add: Watches are saved/restored with a solution Add: Ctrl-A works now in export assembly controls Add: Preliminary graphic import dialog (not fully functional yet) Add: range and block selection in sprite/charset editor (Shift-Click = range, Alt-Click = block) Fix: Expression evaluator could miscalculate when both division and multiplication were in an expression without parenthesis

SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated Ship cube rotation to rotate ship back to original location (cubes are reoriented but ship appears no different to outsider), and to rotate Grouped items. Repair now fixes the loss of Grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the Grouping on one part of the ship). Installation of this version wi...

Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects.
See a detailed list of improvements, breaking changes and a general overview of version 2. ADDITIONAL DOWNLOADS: Smooth Streaming Client SDK for Windows 8 Applications Smooth Streaming Client SDK for Windows 8.1 Applications Smooth Streaming Client SDK for Windows Phone 8.1 Applications Microsoft PlayReady Client SDK for Windows 8 Applications Microsoft PlayReady Client SDK for Windows 8.1 Applicat...

TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update. New items, walls, and tiles Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Fixed inst...

DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: DotNet.Highcharts 4.0 Tested and adapted to the latest version of Highcharts 4.0.1 Added new chart type: Heatmap Added new type PointPlacement which represents enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class 997: Split container from JS 1006: Series/Categories with numeric names don't render DotNet.Highcharts.Samples Updated s...

ConEmu - Windows console with tabs: ConEmu 140523 [Alpha]: ConEmu - developer build x86 and x64 versions. Written in C++, no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information you may find at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file near to "ConEmu.exe"

Aspose for Apache POI: Missing Features of Apache POI SL - v 1.1: Release contains the Missing Features in Apache POI SL SDK in Comparison with Aspose.Slides for dealing with Microsoft PowerPoint. What's New? Following Examples: Managing Slide Transitions Manage Smart Art Adding Media Player Adding Audio Frame to Slide Feedback and Suggestions Many more examples are yet to come here. Keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.

PowerShell App Deployment Toolkit: PowerShell App Deployment Toolkit v3.1.3: Added CompressLogs option to the config file. Each Install / Uninstall creates a timestamped zip file with all MSI and PSAppDeployToolkit logs contained within Added variable expansion to all paths in the configuration file Added documentation for each of the Toolkit internal variables that can be used Changed Install-MSUpdates to continue if any errors are encountered when installing updates Implement /Force parameter on Update-GroupPolicy (ensure that any logoff message is ignored) ...

WordMat: WordMat v. 1.07: A quick fix because scientific notation was broken in v. 1.06 read more at http://wordmat.blogspot.com

Mini SQL Query: Mini SQL Query (1.0.72.457): Apologies for the previous update!
    New Projects

    ASP.Net MCV4 Simplified Code Samples: This project is intended to simplify ASP.NET MVC4 code samples; each task is implemented with a minimum of code to reduce complexity.
    Calvin: net???
    CodeLatino by Latinosoft: A modified version of codeShow -- probably taking more than a month.
    freeasyBackup: A free and easy-to-use backup tool for everyone, without any cloud restrictions.
    freeasyExplorer: A free and easy-to-use file explorer for everyone.
    openPDFspeedreader: #spritz #pdfreader #speedreader
    PDF Editor to Edit PDF Files in your ASP.NET Applications: This sample application allows users to edit PDF files online using Aspose.Pdf for .NET.
    SharePoint World Cup 2013: world cup 2014
    SSAS Long Running Query Performance Helper: This utility helps investigate long-running multidimensional or mining queries in discovery, de-parameterization, and re-parameterization back to source format.
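    The "Create Word documents without MS Word" release above appears to be the DocX library; assuming that is the case, generating a document looks roughly like the following sketch. The Novacode namespace, file name, and paragraph text are my assumptions, not details from the release notes.

        // Hedged sketch of generating a .docx file with the DocX library,
        // assuming this CodePlex project is DocX (Novacode namespace).
        using Novacode;

        class DocXDemo
        {
            static void Main()
            {
                using (DocX document = DocX.Create("hello.docx")) // hypothetical file name
                {
                    document.InsertParagraph("Hello from DocX - no MS Word required.");
                    document.Save();
                }
            }
        }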

    Read the article

  • CodePlex Daily Summary for Saturday, June 07, 2014

    CodePlex Daily Summary for Saturday, June 07, 2014

    Popular Releases

    SFDL.NET: SFDL.NET (2.2.9.3): Changelog: retry bugfix (the error counter was not being reset correctly). New setting: the retry wait time is now configurable.
    Load Runner - HTTP Pressure Test Tool: Load Runner 1.2: 1. Added support for distributed load tests (read the documentation for details). 2. Added detailed documentation.
    Aspose for Apache POI: Aspose.Words vs Apache POI WP - v 1.1: The release contains the code comparison for features in the Apache POI WP SDK and Aspose.Words. What's new? The following examples: Working with Headers; Working with Footers; Access Ranges in Document; Delete Ranges in Document; Insert Before and After Ranges. Feedback and suggestions: many more examples are yet to come. Keep visiting us, and raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    babelua: 1.5.7.0: V1.5.7.0 - 2014.6.6. Stability improvement: use the "lua scripts folder" as the Lua search path when debugging.
    SEToolbox: SEToolbox 01.033.007 Release 1: Fixed breaking changes from the latest Space Engineers update. Installation of this version will replace the older version.
    Virto Commerce Enterprise Open Source eCommerce Platform (asp.net mvc): Virto Commerce 1.10: Virto Commerce Community Edition version 1.10. To install the SDK package, please refer to the SDK getting started documentation. To configure the source code package, please refer to the source code getting started documentation. This release includes bug fixes and improvements (including Commerce Manager localization and https support). More details about this release can be found on our blog at http://blog.virtocommerce.com.
    NPOI: NPOI 2.1: Assembly version: 2.1.0. New features: a. XSSFSheet.CopySheet; b. Excel2Html for XSSF; c. insert picture in Word 2007; d. implement the IfError function in the formula engine. Bug fixes: a. fix conditional formatting issue; b. fix ctFont order issue; c. fix vertical alignment issue in XSSF; d. add IndexedColors to NPOI.SS.UserModel; e. fix decimal point issue in non-English cultures; f. fix SetMargin issue in XSSF; g. fix multiple image insert issue in XSSF; h. fix rich text style missing issue in XSSF; i. fix cell... (A minimal NPOI sketch follows this summary.)
    51Degrees - Device Detection and Redirection: 3.1.2.3: Version 3.1 highlights: The device detection algorithm is over 100 times faster; regular expressions and Levenshtein distance calculations are no longer used. The device detection algorithm's performance is no longer limited by the number of device combinations contained in the dataset. Two modes of operation are available: Memory - the detection data set is loaded into memory and there is no continuous connection to the source data file; slower initialisation time but faster detection performanc...
    CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.27.0: CodeMap now indicates the type name for all members. Implemented running scripts 'as administrator': just add '//css_npp asadmin' to the script and run it as usual. 'Prepare script for distribution' now aggregates script dependency assemblies. Various improvements in the interactions between CodeSnippet, Autocompletion, and MethodInfo. Added printing of line numbers for the entries in CodeMap (subject to a configuration value). Improved debugging step indication for classless scripts ...
    Kartris E-commerce: Kartris v2.6003: Bug fixes: fixed an issue where a category could not be saved if its parent categories were not altered; updated the split-string function to 500 chars, which otherwise caused problems with attribute filtering in PowerPack. Improvements: if a user has group pricing below that available through quantity discounts, that price will show in place of the quantity discount.
    ClosedXML - The easy way to OpenXML: ClosedXML 0.72.3: 70426e13c415 ClosedXML for .Net 4.0 now uses Open XML SDK 2.5; b9ef53a6654f Merge branch 'master' of https://git01.codeplex.com/forks/vbjay/closedxml; 727714e86416 Fix range.Merge(Boolean) for .Net 3.5; eb1ed478e50e Make public range.Merge(Boolean checkIntersects); 6284cf3c3991 More performance improvements when saving.
    DNN Blog: 06.00.07: Highlights: Enhancement: option to show the management panel on child modules. Fix: changed SPROC and code to ensure the right people are notified of pending comments (CP-24018). Fix: notification actions authentication now assumes the right module id so these will work from the messaging center (CP-24019). Fix: fixed an issue where categories in a post would not save when no categories and no tags were selected.
    csv2xlsx: Source code: Source code of the application.
    Family Tree Analyzer: Version 4.0.0.0-beta5: This major release introduces a significant new feature: the ability to search the UK Ordnance Survey 50k Gazetteer for small place names that often don't appear on a modern Google Map. This means significant numbers of locations that were previously not found by Google geocoding will now be found by using OS geocoding. In addition, support has been added for loading the Scottish 1930 parish boundary maps, whilst slightly different from the parish boundaries in use in the 1800s that are familiar to...
    TEncoder: 4.0.0: --4.0.0: Added: video downloader. Added: total progress is updated more smoothly. Added: MP4Box progress is shown. Added: a tool to create a GIF image from a video. Added: an option to disable trimming. Added: the audio track option won't be used for MPEG sources by default. Fixed: the subtitle position wasn't used. Fixed: duration info in the file list wasn't updated after trimming. Updated: FFMpeg.
    VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014): Corrected an issue while creating a hidden operating system. Minor fixes (see the git history for more details).
    Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from NuGet or the Microsoft Download Center. This release finally addresses the overzealous behaviour of the HTML sanitizer, which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for WebForms compatibility. This will be the last version of AntiXSS that contains a sanitizer; any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index.... (A usage sketch follows this summary.)
    ConEmu - Windows console with tabs: ConEmu 140602 [Alpha]: ConEmu - developer build, x86 and x64 versions. Written in C++; no additional packages required. Run "ConEmu.exe" or "ConEmu64.exe". Some useful information can be found at: http://superuser.com/questions/tagged/conemu http://code.google.com/p/conemu-maximus5/wiki/ConEmuFAQ http://code.google.com/p/conemu-maximus5/wiki/TableOfContents If you want to use ConEmu in portable mode, just create an empty "ConEmu.xml" file next to "ConEmu.exe".
    Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines: added all the parameters available from the timeline endpoints in Tweetinvi. This is available for HomeTimeline, UserTimeline, and MentionsTimeline:

        // Simple query
        var tweets = Timeline.GetHomeTimeline();

        // Create a parameter object for queries with specific parameters
        var timelineParameter = Timeline.CreateHomeTimelineRequestParameter();
        timelineParameter.ExcludeReplies = true;
        timelineParameter.TrimUser = true;
        var filteredTweets = Timeline.GetHomeTimeline(timelineParameter);

    Tweetinvi 0.9.3.1...
    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right-click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes the removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...

    New Projects

    .NET Extended Framework: .NET Extended Framework
    Activity Diagram: This is an attempt to parse C# operation source and convert it into UML activity diagrams. The development of a suitable diagram editor is the next priority.
    Bango Payment Flow in-app payments: Bango Payment Flow in-app payments
    DeckEditor: ?????????????????????????????????。 ?????????、CSV????、???????????。
    demodjango: Just a demo project to record how to use Django.
    Export Bugs: This application will export all the bugs in the current project from TFS. It also has the option to export all attachments in the current project.
    Fantom - Expression Based Dynamic Language Manager: Easy dynamic language management (over MSSQL) for ASP.NET Web Forms. Makes it very easy to create dynamic multi-language ASP.NET web sites.
    JIRAShell: Provides PowerShell cmdlets for working with JIRA.
    PipeUtil: PipeUtil is a library that simplifies use of the Windows pipe feature. It has client and server capabilities with thread support.
    Post Deployment Smoke Tester: This tool enables you to easily carry out post-deployment smoke tests against your installed deployments to determine if your environment is in a good state.
    Quan Ly Dang Ki Internet: Internet subscription management (Vietnamese: "quan ly dang ki internet").
    ToolKD: ToolKD is a console application designed to change the binary code for directories used by the KitchenDraw program.
    Wiki Parsing Service: A web service to parse wiki markup to HTML and vice versa.
    XBee-API for .NET: This is a .NET library for communication with XBee/XBee-Pro series 1 (IEEE 802.15.4) and series 2 (ZB/ZigBee Pro) OEM RF modules.
    XPinterest: XPinterest
    ZSoft-CMS: The project is going to use a new approach, different from regular modular CMSes.
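    For context on the NPOI 2.1 entry above: NPOI is the .NET port of Apache POI, and the XSSF fixes listed refer to its Office 2007+ (.xlsx) model. A minimal sketch follows; the sheet name, cell text, and file name are illustrative assumptions.

        // Minimal NPOI sketch: create an .xlsx workbook with the XSSF model
        // (the Office 2007+ format referenced by the XSSF fixes above).
        using System.IO;
        using NPOI.XSSF.UserModel;

        class NpoiDemo
        {
            static void Main()
            {
                var workbook = new XSSFWorkbook();
                var sheet = workbook.CreateSheet("Report"); // hypothetical sheet name
                var row = sheet.CreateRow(0);
                row.CreateCell(0).SetCellValue("Generated with NPOI 2.1");

                using (var stream = new FileStream("report.xlsx", FileMode.Create, FileAccess.Write))
                {
                    workbook.Write(stream); // serialize the workbook to disk
                }
            }
        }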

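    Since the AntiXSS 4.3.0 notes above flag this as the last release to ship a sanitizer, a short sketch of the two calls involved may help. This reflects my recollection of the 4.x API surface (the Encoder and Sanitizer classes in Microsoft.Security.Application), so treat the exact names as an assumption.

        // Hedged sketch of AntiXSS 4.x usage: encoding untrusted text for
        // HTML output versus sanitizing an HTML fragment.
        using System;
        using Microsoft.Security.Application;

        class AntiXssDemo
        {
            static void Main()
            {
                string untrusted = "<script>alert('xss')</script><b>bold</b>";

                // Encode: every markup character is escaped for safe display.
                string encoded = Encoder.HtmlEncode(untrusted);

                // Sanitize: dangerous markup is stripped, benign tags survive.
                string sanitized = Sanitizer.GetSafeHtmlFragment(untrusted);

                Console.WriteLine(encoded);
                Console.WriteLine(sanitized);
            }
        }

    As the release notes themselves suggest, new code should prefer a maintained sanitizer such as AntiSamy, since the AntiXSS sanitizer is being retired.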
    Read the article
