Search Results

Search found 37101 results on 1485 pages for 'array based'.

Page 403/1485

  • MSBuild publishing vs Visual Studio IDE publishing

    - by reggie
    I am currently working on MSBuild to publish my WinForms application based on the environment selected (Dev or Prod). I am using the MSBuild Community Tasks and referencing this article to achieve this purpose. I have a few theoretical doubts about publishing applications:

    1) Is there any difference between publishing through the Visual Studio IDE and through MSBuild?
    2) What do most developers prefer to use, and why?
    3) What are the advantages of using MSBuild to publish an application as compared to publishing through the Visual Studio IDE?
    4) Which is faster?

    I am using a .NET 3.5 WinForms application developed in C#, and my question pertains to ClickOnce Windows applications only. Please help me clear up these doubts.

    Read the article

  • Is this overkill? Using MDX queries and cubes instead of SQL stored procedures

    - by Jason Holland
    I am new to Microsoft's SQL Server Analysis Services Cubes and MDX queries. Where I work we have a daily sales table in SQL Server 2005 that already contains an aggregate of sale information per store per day. At this time it contains only 164,000+ rows. We have a sales cube dedicated to this table that about 15 reports are based off of. Now, I should also note that we generate reports based on our own fiscal year criteria: a 13-period year (one "month" equals 28 days, etc.). Is this overkill? At what point is it justified to begin using SSAS Cubes/MDX over plain old SQL Server stored procedures? Since I have always been just using plain old SQL, am I tragically late to the MDX party?

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
    Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer:

    Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview - Olav Sandstå, Oracle
    The goal of the MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included.

    Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6 - Manyi Lu, Oracle
    Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6.

    Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming?
    This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future.

    Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6 - Øystein Grøvlen, Oracle
    MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques.

    Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6 - Evgeny Potemkin, Oracle
    The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time. This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN, and how and why the MySQL server chooses a particular query plan.

    You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$300 over the on-site fee - Register Now!

    Read the article

  • Fun With the Chrome JavaScript Console and the Pluralsight Website

    - by Steve Michelotti
    Originally posted on: http://geekswithblogs.net/michelotti/archive/2013/07/24/fun-with-the-chrome-javascript-console-and-the-pluralsight-website.aspx

    I’m currently working on my third course for Pluralsight. Everyone already knows that Scott Allen is a “dominating force” for Pluralsight but I was curious how many courses other authors have published as well. The Pluralsight Authors page - http://pluralsight.com/training/Authors - shows all 146 authors and you can click on any author’s page to see how many (and which) courses they have authored. The problem is: I don’t want to have to click into 146 pages to get a count for each author. With this in mind, I figured I could write a little JavaScript using the Chrome JavaScript console to do some “detective work.”

    My first step was to figure out how the HTML was structured on this page so I could do some screen-scraping. Right-click the first author - “Inspect Element”. I can see there is a primary <div> with a class of “main” which contains all the authors. Each author is in an <h3> with an <a> tag containing their name and a link to their page.

    This web page already has jQuery loaded so I can use $ directly from the console. This allows me to just use jQuery to inspect items on the current page. Notice this is a multi-line command; in order to use multiple lines in the console you have to press SHIFT-ENTER to go to the next line. Now I can see I’m extracting data just fine. At this point I want to follow each URL, then screen-scrape that next page to see how many courses each author has done. Let’s take a look at the author detail page.

    I can see we have a table (with a CSS class of “course”) that contains rows for each course authored. This means I can get the number of courses pretty easily. Now I can put this all together. Back on the authors page, I want to follow each URL, extract the returned HTML, and grab the count. In the code below, I simply use the jQuery $.get() method to get the author detail page, and the “data” variable that is in the callback contains the HTML. A nice feature of jQuery is that I can simply put this HTML string inside of $() and use jQuery selectors directly on it in conjunction with the find() method.

    Now I’m getting somewhere. I have every Pluralsight author and how many courses each one has authored. But that’s not quite what I’m after - what I want to see are the authors that have the MOST courses in the library. What I’d like to do is to put all of the data in an array and then sort that array descending by number of courses. I can add an item to the array after each author detail page is returned, but the catch here is that I can’t perform the sort operation until ALL of the author detail pages have been processed. The jQuery $.get() method is naturally an async method so I essentially have 146 async calls and I don’t want to perform my sort action until ALL have completed (side note: don’t run this script too many times or the Pluralsight servers might think you’re an evil hacker attempting a DoS attack and deny you). My C# brain wants to use a WaitHandle WaitAll() method here but this is JavaScript. I was able to do this by using the jQuery Deferred() object. I create a new deferred object for each request and push it onto a deferred array. After each request is complete, I signal completion by calling the resolve() method. Finally, I use a $.when.apply() method to execute my descending sort operation once all requests are complete.

    Here is my complete console command:

    var authorList = [],
        defList = [];
    // Collect each author link on the authors page.
    $(".main h3 a").each(function() {
        var def = $.Deferred();
        defList.push(def);
        var authorName = $(this).text();
        var authorUrl = $(this).attr('href');
        // Fetch the author detail page and count the course rows.
        $.get(authorUrl, function(data) {
            var courseCount = $(data).find("table.course tbody tr").length;
            authorList.push({ name: authorName, numberOfCourses: courseCount });
            def.resolve();
        });
    });
    // Sort and print once every request has resolved.
    $.when.apply($, defList).then(function() {
        console.log("*Everything* is complete");
        var sortedList = authorList.sort(function(obj1, obj2) {
            return obj2.numberOfCourses - obj1.numberOfCourses;
        });
        for (var i = 0; i < sortedList.length; i++) {
            console.log(sortedList[i]);
        }
    });

    And here are the results: WOW! John Sonmez has 44 courses!! And Matt Milner has 29! I guess Scott Allen isn’t the only “dominating force”. I would have assumed Scott Allen was #1 but he comes in as #3 in total course count (of course Scott has 11 courses in the Top 50, and 14 in the Top 100, which is incredible!). Given that I’m in the middle of producing only my third course, I better get to work!

    Read the article

  • DIY Door Lock Grants Access via RFID

    - by Jason Fitzpatrick
    If you’re looking to lighten the load on your pocket and banish the jingling of keys, this RFID-key hack makes your front door keycard accessible–and even supports groups and user privileges. Steve, a DIYer and Hack A Day reader, was looking for a solution to a simple problem: he wanted to easily give his friends access to his home without having to copy lots of keys and bulk up their key rings. Since all his friends already carried a Boston public transit RFID card the least intrusive solution was to hack his front door to support RFID cards. His Arduino-based solution can store up to 50 RFID card identifiers, supports group-based access, and thanks to a little laser cutting and stain the project enclosure blends in with the Victorian styling of his home’s facade. Hit up the link below to see his code–for a closer look at the actual enclosure check out this photo gallery. RFID Front Door Lock [via Hack A Day] HTG Explains: What is DNS? How To Switch Webmail Providers Without Losing All Your Email How To Force Windows Applications to Use a Specific CPU

    Read the article

  • Bash arrays and case statements - review my script

    - by Felipe Alvarez
    #!/bin/bash
    # Change the environment in which you are currently working.
    # Actually, it calls the relevant 'lettus.sh' script
    if [ "${BASH_SOURCE[0]}" == "$0" ]; then
        echo "Try running this as \". chenv $1\""
        exit 0
    fi

    usage(){
        echo "Usage: . ${PROG}       -- Shows a list of user-selectable environments."
        echo "       . ${PROG} [env] -- Select environment."
        echo "       . ${PROG} -h    -- Shows this usage screen."
        return
    }

    showEnv(){
        # check if index0 exists, assume we have at least the first (zeroth) element
        #if [ -z "${envList}" ]; then
        if [ -z "${envList[0]}" ]; then
            echo "array \$envList is empty! " >&2
            return 1
        fi
        # Show all elements in array (0 -> n-1)
        for i in $(seq 0 $((${#envList[@]} - 1))); do
            echo ${envList[$i]}
        done
        return
    }

    setEnv(){
        if [ -z "$1" ]; then
            usage; return
        fi
        case $1 in
            cold)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_cold.sh;;
            coles)   FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_coles.sh;;
            fc)      FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fc.sh;;
            fcrm)    FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_fcrm.sh;;
            stable)  FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_stable.sh;;
            tip)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_tip.sh;;
            uat)     FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_uat.sh;;
            wellmdc) FILE_TO_SOURCE=/u2/tip/conf/ctrl/lettus_wellmdc.sh;;
            *)       usage; return;;
        esac
        if $IS_SOURCED; then
            echo "Environment \"$1\" selected."
            echo "Now sourcing file \"$FILE_TO_SOURCE\"..."
            . ${FILE_TO_SOURCE}
            return
        else
            return 1
        fi
    }

    main(){
        if [ -z "$1" ]; then
            showEnv; return
        fi
        case $1 in
            -h) usage;;
            *)  setEnv $1;;
        esac
        return
    }

    PROG="chenv"
    # create array of user-selectable environments
    envList=( cold coles fc fcrm stable tip uat wellmdc )
    main "$@"
    return

    If I could, I'd like to get some feedback on a better way to accomplish any of the following:
    - run through the case statement
    - make the script trivially simple to maintain/upgrade/update

    Read the article

  • Factors to consider when building an algorithm for gun recoil

    - by Nate Bross
    What would be a good algorithm for calculating the recoil of a shooting gun's cross-hairs? What I've got now is something like this:

    1. Define min/max recoil based on weapon size
    2. Generate a random amount of "delta" movement
    3. Apply the random value to X, Y, or both of the cross-hairs (only "up" on the Y axis)
    4. Multiply the new delta based on time from the previous shot (more recoil for full-auto)

    What I'm worried about is that this feels rather predictable; what other factors should one take into account when building recoil? While I'd like it to be somewhat predictable, I'd also like to keep players on their toes. I'm thinking about increasing the min/max recoil values by a large amount (relatively) and adding a weighting, so large recoils will be more rare -- it seems like a lot of effort to go into something I felt would be simple. Maybe this is just something that needs to be fine-tuned with additional playtesting, and more playtesters? I think it's important to note that the recoil will be a large part of the game, and is a key factor in the game being fun/challenging or not.
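    To make those four steps concrete, here is a minimal C# sketch; all tuning constants and names are hypothetical, and the squared random roll is one way to implement the "weighting so large recoils are rarer" idea from the question:

    using System;

    // Minimal sketch of the recoil steps described above.
    class RecoilModel
    {
        private readonly Random rng = new Random();
        private double lastShotTime = double.MinValue;

        private const double MinRecoil = 0.5;       // step 1: defined per weapon size
        private const double MaxRecoil = 3.0;
        private const double FullAutoWindow = 0.2;  // seconds between shots

        // Computes the cross-hair offset for one shot fired at 'now' (in seconds).
        public void Shoot(double now, out double dx, out double dy)
        {
            // Step 2: random delta; squaring the roll biases toward small kicks,
            // so large recoils stay possible but rare (the weighting idea).
            double roll = rng.NextDouble();
            double delta = MinRecoil + (MaxRecoil - MinRecoil) * roll * roll;

            // Step 4: more recoil when the previous shot was recent (full-auto).
            if (now - lastShotTime < FullAutoWindow)
                delta *= 1.5;
            lastShotTime = now;

            // Step 3: apply to X and Y; Y only ever kicks upward.
            dx = (rng.NextDouble() * 2.0 - 1.0) * delta * 0.3;  // small horizontal drift
            dy = delta;                                          // upward kick
        }
    }

    A model like this is also easy to extend with the other factors the question hints at, such as per-weapon multipliers or recovery over time, since they all reduce to extra terms on the delta.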

    Read the article

  • Game-oriented programming language features/objectives/paradigm?

    - by Klaim
    What are the features, language objectives (general problems to solve), or paradigms that a fictive programming language targeted at games (any kind of game) would require? For example, we would obviously need at least performance (in speed and memory), because a lot of games simply require that, but it has a price in the languages we currently use. Expressivity might be a common feature that is required for all languages. I guess some concepts from paradigms not usually used for games, like actor-based languages or language-based message passing, might be useful too. So I ask you: what would be ideal for games? (Maybe one day someone will take those answers and build a language over it? :D) Please set 1 feature/objective/paradigm per answer. Note: maybe this question doesn't make sense to you. In this case, please explain why in an answer. It's a good thing to have answers to this question for the ideas that might pop into your head sometimes.

    Read the article

  • Oracle ERP Cloud Solution Defines Revenue Recognition Software Market

    - by Steve Dalton
    Revenue is a fundamental yardstick of a company's performance, and one of the most important metrics for investors in the capital markets. So it’s no surprise that the accounting standard boards have devoted significant resources to this topic, with a key goal of ensuring that companies use a consistent method of recognizing revenue. Due to the myriad of revenue-generating transactions, and the divergent ways organizations recognize revenue today, the IASB and FASB have been working for 12 years on a common set of accounting standards that apply to all industries in virtually all countries. Through their joint efforts, on May 28, 2014 the FASB and IASB released the IFRS 15 / ASU 2014-9 (Revenue from Contracts with Customers) converged accounting standard.

    This standard applies to revenue in all public companies, but heavily impacts organizations in any industry that might have complex sales contracts with multiple distinct deliverables (obligations). For example, an auto dealer who bundles free service with the sale of a car can only recognize the service revenue once the owner of the car brings it in for work. Similarly, high-tech companies that bundle software licenses, consulting, and support services on a sales contract will recognize bundled service revenue once the services are delivered. Now all companies need to review their revenue for hidden bundling and implicit obligations.

    Numerous time-consuming and judgmental activities must be performed to properly recognize revenue for complex sales contracts. To illustrate, after the contract is identified, organizations must identify and examine the distinct deliverables, determine the estimated selling price (ESP) for each deliverable, then allocate the total contract price to each deliverable based on the ESPs. In terms of accounting, organizations must determine whether the goods or services have been delivered or performed to the customer’s satisfaction, then either book revenue in the current period or record a liability for the obligation if revenue will be recognized in a future accounting period.

    Oracle Revenue Management Cloud was architected and developed so organizations can simplify and streamline revenue recognition. Among other capabilities, the solution uses business rules to efficiently identify and examine contracts, intelligently calculate and allocate deliverable prices based on prescribed inputs, and accurately recognize revenue for each deliverable based on customer satisfaction. "Oracle works very closely with our customers, the Big 4 accounting firms, and the accounting standard boards to deliver an adaptive, comprehensive, new generation revenue recognition solution,” said Rondy Ng, Senior Vice President, Applications Development. “With the recently announced IFRS 15 / ASU 2014-9, Oracle is ready to support customer adoption of the new standard with our Revenue Management Cloud,” said Rondy.

    Oracle Revenue Management Cloud, an integral part of Oracle Financials Cloud, helps organizations comply with accounting standards, provides them with confidence that reported revenue is materially accurate, and simplifies the accounting process for revenue recognition. Stay tuned to this blog for regular updates on Oracle Revenue Management Cloud. We also invite you to review our new oracle.com ERP pages @ oracle.com/erp. We will be updating these pages very soon with more information about Oracle Revenue Management Cloud.
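    As a back-of-the-envelope illustration of the relative-ESP allocation step described above (purely hypothetical numbers and names, not Oracle code), the arithmetic boils down to a proportional split:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical illustration of relative-ESP allocation.
    class EspAllocation
    {
        static void Main()
        {
            decimal contractPrice = 1200m;

            // Estimated selling price (ESP) per deliverable.
            var esps = new Dictionary<string, decimal>
            {
                { "Software license", 1000m },
                { "Consulting",        300m },
                { "Support (1 year)",  200m },
            };

            decimal totalEsp = esps.Values.Sum();  // 1500

            // Allocate the contract price in proportion to each ESP.
            foreach (var item in esps)
            {
                decimal allocated = contractPrice * item.Value / totalEsp;
                Console.WriteLine("{0}: {1:C}", item.Key, allocated);
            }
            // License gets 800, consulting 240, support 160; the service
            // portions are then recognized as the services are delivered,
            // not at contract signing.
        }
    }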

    Read the article

  • Webcast - Social BPM: Integrating Enterprise 2.0 with Business Applications

    - by peggy.chen
    In today's fast-paced marketplace, successful companies rely on agile business processes and collaborative work environments to stay ahead of the competition. By making your application-based business processes visible, shareable, and flexible through dynamic, process-aware user interfaces, you can ensure that your team's best ideas are heard - and implemented quickly. Join us for this complimentary live Webcast and learn how Oracle's business process management (BPM) solution with integrated Enterprise 2.0 capabilities will enable your team to:

    - Embed ad hoc collaboration into your structured processes and gain a unified view of enterprise information - across business functions - for effective and efficient decision-making
    - Reach out to an expanded network for expert input in resolving exceptions in business workflows
    - Add social feedback loops to your enterprise applications and continuously improve business processes

    Join us for this LIVE Webcast tomorrow as we discuss how business process management with integrated Enterprise 2.0 collaboration improves business responsiveness and enhances overall enterprise productivity. Take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Register for the webcast now!

    Read the article

  • How should I log time spent on multiple tasks?

    - by xenoterracide
    In Joel's blog on evidence-based scheduling he suggests making estimates based on the smallest unit of work and logging extra work back to the original task. The problem I'm now experiencing is that I'll have tasks like "create object A", a subtask "method A" (which creates object B), and "test all of the above". I create tasks for each of these, which seems to result in ok-ish estimates (I need practice), but when I go to log work I find that I worked on 4 tasks at once, because I tweak method A, find a bug in the test, and refactor object B all while coding it. How should I go about logging this work? Should I say I spent, for example, 2 hours on each of the 4 tasks I worked on in the 8-hour day?

    Read the article

  • Oracle's Thirteen Engineered Systems

    - by Luis Moreno Campos
    You already need a catalogue to keep up with all the new stuff coming out of the Oracle Engineered Systems factory.

    In the Exadata portfolio you have 4 systems:
    - Quarter Rack X2-2 Database Machine
    - Half-Rack X2-2 Database Machine
    - Full-Rack X2-2 Database Machine
    - X2-8 Database Machine

    But if Exadata presents a stunning portfolio, Exalogic doesn't fall behind on that, putting out 6 versions: 3 sizes (Quarter, Half and Full) with x86 processors and the same 3 sizes with SPARC-based processors.

    Finally we have 3 new systems called SPARC Superclusters, where Solaris 11 was re-engineered to take more out of the power of InfiniBand: "Available in the next calendar year, the Oracle SPARC Supercluster will be available in T3-2, T3-4 and M5000-based configurations".

    I see Oracle delivering on its promise to tightly integrate hardware and software to work closer together.

    Read the article

  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to CodePlex. This release is feature complete and if I don’t get any bug reports this is also pretty much the final V1.

    Changes from B1

    - The configuration data access is now based on EF 4.1 code first (a minimal sketch of the pattern follows at the end of this post). This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios).
    - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) - so I made them optional. If you want to use them go to web.config and uncomment the new provider.
    - The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    - WS-Federation single sign out is now conforming to the spec.
    - Certificate based endpoint identities for SSL endpoints are optional now.
    - Added an initial configuration “wizard”. This sets up the signing certificate, issuer URI and site title on the first run.

    Installation

    This is still a “developer” release - that means it ships with source code that you have to build, etc. But from that point it should be a little more straightforward than it used to be:

    - Make sure SSL is configured correctly for IIS.
    - Map the WebSite directory to a vdir in IIS.
    - Run the web site. This should bring up the initial configuration.
    - Make sure the worker process account has access to the signing certificate private key.
    - Make sure all your users are in the “IdentityServerUsers” role in your role store. Administrators need the “IdentityServerAdministrators” role.

    That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! thanks!
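    For readers who haven't used EF 4.1 code first, here is a minimal sketch of the pattern; the entity and context names are made up for illustration and are not IdentityServer's actual configuration model:

    using System.Data.Entity;  // EF 4.1 "code first"

    // Hypothetical entity - IdentityServer's real configuration model differs.
    public class RelyingParty
    {
        public int Id { get; set; }
        public string Realm { get; set; }
        public string ExtraData { get; set; }  // e.g. per-RP UI hints
    }

    public class ConfigurationContext : DbContext
    {
        public DbSet<RelyingParty> RelyingParties { get; set; }
    }

    // Usage: EF creates the database on first use, so swapping data stores
    // is mostly a connection-string change.
    //
    // using (var db = new ConfigurationContext())
    // {
    //     db.RelyingParties.Add(new RelyingParty { Realm = "https://app/" });
    //     db.SaveChanges();
    // }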

    Read the article

  • How To Build An Enterprise Application - Introduction

    - by Tuan Nguyen
    An enterprise application is a software system which fulfills 4 core quality attributes:

    - Reliability
    - Flexibility
    - Reusability
    - Maintainability

    Reliability is the ability of a system or component to perform its required functions under stated conditions for a specific period of time. Because there is no way other than testing to make sure a system is reliable, we can exchange the term reliability for the term testability.

    Flexibility is the ability to change a system's core features without violating unrelated features or components. Although flexibility can help us achieve interoperability easily, the opposite is not true. For example, a program might run on multiple platforms and contain logic for many scenarios, but that wouldn't mean it was flexible if it forced us to rewrite code in all components when we just want to change some aspects of a feature it had.

    Reusability is the ability to share one or more of a system's components with another system. We should judge a component's reusability only in the context in which it is used. For example, we write classes that implement UI logic and deliver them only to classes implementing UI.

    Maintainability is the ability to add or remove features from a system after it was released. Maintainability consists of many factors such as readability, analyzability, and extensibility, of which extensibility is critical. Maintainability requires us to write code that is longer and more complex than normal, but that doesn't mean we introduce unnecessarily complex code. We always try to make our code clear and transparent to everyone.

    An enterprise application is built on an enterprise design which consists of two parts: low-level design and high-level design.

    At the low level, the focus is on building loosely coupled classes and components (sketched in code below). Particularly, it recommends:

    - Each class or component undertakes only a single responsibility (design based on unit test)
    - Classes or components implement and work through interfaces (design based on contract)
    - Dependency relationships between classes and components can be injected at run time (design based on dependency)

    At the high level, the focus is on architecting the system into tiers and layers. Particularly, it recommends:

    - Divide the system into subsystems for deployment. Each subsystem is called a tier. Typically, an enterprise application would have 3 tiers.
    - Arrange classes and components into logical containers called layers. Typically, an enterprise application would have 5 layers.
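    As a minimal C# illustration of the three low-level recommendations (all names here are illustrative):

    using System;

    // Single responsibility: this class only persists orders.
    public interface IOrderStore
    {
        void Save(string order);
    }

    public class SqlOrderStore : IOrderStore
    {
        public void Save(string order)
        {
            /* write to the database */
        }
    }

    // Work through the interface (design based on contract), and let the
    // dependency be injected at run time (design based on dependency).
    public class OrderService
    {
        private readonly IOrderStore store;

        public OrderService(IOrderStore store)
        {
            this.store = store;
        }

        public void PlaceOrder(string order)
        {
            store.Save(order);
        }
    }

    In a unit test, a fake IOrderStore can be injected instead of SqlOrderStore, which is exactly what makes the design testable.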

    Read the article

  • Raid 5 mdadm Problem - Help Please

    - by user66260
    My RAID 5 array (4 x 1TB disks, WD10EARS) was showing as degraded. I looked and one of the disks wasn't installed, so I re-added it with the mdadm add command. The array is now showing as (null) Array, but can't be mounted. If I run:

    root@warren-P5K-E:/home/warren# sudo mdadm --misc --detail /dev/md0

    I get:

    mdadm: cannot open /dev/md0: No such file or directory

    and running:

    root@warren-P5K-E:/home/warren# cat /proc/mdstat

    gives me:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    unused devices: <none>

    The data is very important.

    root@warren-P5K-E:/home/warren# mdadm --examine /dev/sda
    /dev/sda:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 00000000:00000000:00000000:00000000
      Creation Time : Sat May 26 12:08:14 2012
         Raid Level : -unknown-
       Raid Devices : 0
      Total Devices : 4
    Preferred Minor : 0
        Update Time : Sat May 26 12:08:40 2012
              State : active
     Active Devices : 0
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 4
           Checksum : 82d5b792 - correct
             Events : 1

          Number   Major   Minor   RaidDevice State
    this     1       8        0        1      spare   /dev/sda
       0     0       8       16        0      spare   /dev/sdb
       1     1       8        0        1      spare   /dev/sda
       2     2       8       32        2      spare   /dev/sdc
       3     3       8       48        3      spare   /dev/sdd

    root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdb
    /dev/sdb:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 00000000:00000000:00000000:00000000
      Creation Time : Sat May 26 12:08:14 2012
         Raid Level : -unknown-
       Raid Devices : 0
      Total Devices : 4
    Preferred Minor : 0
        Update Time : Sat May 26 12:08:40 2012
              State : active
     Active Devices : 0
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 4
           Checksum : 82d5b7a0 - correct
             Events : 1

          Number   Major   Minor   RaidDevice State
    this     0       8       16        0      spare   /dev/sdb
       0     0       8       16        0      spare   /dev/sdb
       1     1       8        0        1      spare   /dev/sda
       2     2       8       32        2      spare   /dev/sdc
       3     3       8       48        3      spare   /dev/sdd

    root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdc
    /dev/sdc:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 00000000:00000000:00000000:00000000
      Creation Time : Sat May 26 12:08:14 2012
         Raid Level : -unknown-
       Raid Devices : 0
      Total Devices : 4
    Preferred Minor : 0
        Update Time : Sat May 26 12:08:40 2012
              State : active
     Active Devices : 0
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 4
           Checksum : 82d5b7b4 - correct
             Events : 1

          Number   Major   Minor   RaidDevice State
    this     2       8       32        2      spare   /dev/sdc
       0     0       8       16        0      spare   /dev/sdb
       1     1       8        0        1      spare   /dev/sda
       2     2       8       32        2      spare   /dev/sdc
       3     3       8       48        3      spare   /dev/sdd

    root@warren-P5K-E:/home/warren# mdadm --examine /dev/sdd
    /dev/sdd:
              Magic : a92b4efc
            Version : 0.90.00
               UUID : 00000000:00000000:00000000:00000000
      Creation Time : Sat May 26 12:08:14 2012
         Raid Level : -unknown-
       Raid Devices : 0
      Total Devices : 4
    Preferred Minor : 0
        Update Time : Sat May 26 12:08:40 2012
              State : active
     Active Devices : 0
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 4
           Checksum : 82d5b7c6 - correct
             Events : 1

          Number   Major   Minor   RaidDevice State
    this     3       8       48        3      spare   /dev/sdd
       0     0       8       16        0      spare   /dev/sdb
       1     1       8        0        1      spare   /dev/sda
       2     2       8       32        2      spare   /dev/sdc
       3     3       8       48        3      spare   /dev/sdd

    That's the output for all 4 drives.

    Read the article

  • Scheduling Jobs in SQL Server Express

    As we all know, SQL Server 2005 Express is a very powerful free edition of SQL Server 2005. However, it does not contain the SQL Server Agent service, so scheduling jobs is not possible. If we want to do this we have to install a free or commercial 3rd party product. This usually isn't allowed due to the security policies of many hosting companies, and thus presents a problem. Maybe we want to schedule daily backups, database reindexing, statistics updating, etc. This is why I wanted to have a solution based only on SQL Server 2005 Express and not dependent on the hosting company. And of course there is one, based on our old friend the Service Broker.

    Read the article

  • Accenture Launches Smart Grid Data Management Platform

    - by caroline.yu
    Accenture announced today it has launched the Accenture Intelligent Network Data Enterprise (INDE), a data management platform to help utilities design, deploy and manage smart grids. INDE's functionality can be enabled by an array of third party technologies. In addition, Accenture plans to offer utilities the option of implementing the INDE solution based on a pre-configured suite of Oracle technologies. The Oracle-based version of INDE will accelerate the design of smart grids and help reduce the costs and risks associated with smart grid implementation. Stephan Scholl, Senior Vice President and General Manager of Oracle Utilities said, "Oracle and Accenture share a common vision of how the smart grid will enable more efficient energy choices for utilities and their customers. Our combined expertise in delivering mission-critical smart grid applications, security, data management and systems integration can help accelerate utilities toward a more intelligent network now and as future needs arise." For the full press release, click here.

    Read the article

  • Blending textures together, texture fade over / fade in

    - by Deukalion
    What is the best way to render a texture overlapping effect, like in this example? I want either the grass to fade into the snow texture, or the other way around. No rough edges; somehow make them blend over, so the grass has a bit of snow or the snow has a bit of grass. How is this possible during runtime, if that's possible? I don't render this by using the SpriteBatch, since the ground isn't rectangles (they can be moved). This is the way I render each shape (each one of those squares):

    // LoadTexture
    // Apply EffectPass
    device.DrawUserIndexedPrimitives<VertexPositionNormalTexture>
    (
        PrimitiveType.TriangleList,
        render.Item.Points,            // Array of VertexPositionNormalTexture
        0,
        render.Item.Points.Length,
        render.Item.Indexes,           // Array of int indexes (triangulation)
        0,
        render.Item.Indexes.Length / 3,
        VertexPositionNormalTexture.VertexDeclaration
    );
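    One common runtime approach is two-pass rendering: draw the base texture opaquely, then alpha-blend the second texture over it, letting a soft alpha gradient in the overlay texture produce the fade. Below is a minimal XNA 4.0 sketch under stated assumptions - a BasicEffect ('effect') with TextureEnabled = true, and a snowTexture whose alpha channel fades to zero where the grass should show through; the names are hypothetical, not from the code above:

    // Requires: using Microsoft.Xna.Framework.Graphics;
    void DrawBlendedGround(GraphicsDevice device,
                           BasicEffect effect,
                           Texture2D grassTexture,
                           Texture2D snowTexture,
                           VertexPositionNormalTexture[] points,
                           int[] indexes)
    {
        // Pass 1: opaque grass base layer.
        device.BlendState = BlendState.Opaque;
        effect.Texture = grassTexture;
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList,
                points, 0, points.Length,
                indexes, 0, indexes.Length / 3,
                VertexPositionNormalTexture.VertexDeclaration);
        }

        // Pass 2: snow drawn on top with alpha blending; the soft alpha
        // gradient in the snow texture produces the fade, with no hard edges.
        device.BlendState = BlendState.AlphaBlend;
        effect.Texture = snowTexture;
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            device.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
                PrimitiveType.TriangleList,
                points, 0, points.Length,
                indexes, 0, indexes.Length / 3,
                VertexPositionNormalTexture.VertexDeclaration);
        }
    }

    A single-pass pixel shader that samples both textures and lerps them by a blend map is the more flexible version of the same idea, but the two-pass form works with the stock BasicEffect.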

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

    Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently.

    WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, providing much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests.

    Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
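    As a rough illustration of that drop-in usage (the database file name, table, and query below are made up; the provider types are the standard System.Data.SqlServerCe ones):

    using System.Data.SqlServerCe;  // ships as a single assembly in .\bin

    class CompactDemo
    {
        static void Main()
        {
            // |DataDirectory| maps to App_Data in an ASP.NET application.
            string connStr = @"Data Source=|DataDirectory|\FirstApp.sdf";
            using (SqlCeConnection conn = new SqlCeConnection(connStr))
            using (SqlCeCommand cmd = new SqlCeCommand(
                       "select count(*) from Customers", conn))
            {
                conn.Open();
                System.Console.WriteLine(cmd.ExecuteScalar());
            }
        }
    }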
    Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code.

    The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations:

    <!DOCTYPE html>
    <html>
        <head>
            <title></title>
        </head>
        <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- Conditional code -->
            @if(true) {
                <!-- Literal Text embedding in code -->
                <text>
                true
                </text>;
            }
            else
            {
                <!-- Literal Text embedding in code -->
                <text>
                false
                </text>;
            }
        </body>
    </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model.

    However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC.

    It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
    The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>), and it is also aesthetically easier to understand what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine.

    Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0.

    Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder.

    WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
    WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

    @{
        var db = Database.OpenFile("FirstApp.sdf");
        string sql = "select * from customers where Id > @0";
    }
    <ul>
    @foreach(var row in db.Query(sql, 1)){
        <li>@row.FirstName @row.LastName</li>
    }
    </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this wasn’t included in .NET from the beginning, instead of making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required.

    The good, the bad, the obnoxious - it’s still .NET

    The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
    In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), including many HTML helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

    Who needs WebMatrix? Uhm… good question

    Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well.

    The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they’re not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers, you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just been shifted a tad bit further along in your development endeavor, to when you run out of canned components that are supplied either by Microsoft or the community.

    The database helpers are interesting, and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers are interested in to get their work accomplished.
    And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends.

    The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing - especially debugging and IntelliSense, but also general editor support. It’s not clear whether these are missing because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it.

    If anything, WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC.

    As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later, there’s little reason to look at WebMatrix for tooling.

    It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come.
    WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET, IIS7

    Read the article

  • How Circuit Boards Are Manufactured and Tested [Video]

    - by Jason Fitzpatrick
    Circuit boards are in nearly everything: computers, cars, toys, phones, even greeting cards. Check out this tour of a Printed Circuit Board (PCB) factory to see how they’re made. In the video, the owners of Base2 Electronics are watching a PCB testing machine at the factory where they purchase their boards for resale. The machine first scans the board to identify it in the board database, and then the arms start flying as it tests individual circuits on the board. If you’re interested in seeing all the steps of the manufacturing process, hit up the link below for a photo and video tour of the facility. Base2 Electronics Tour of Advanced Circuits [via Hack A Day]

    Read the article

  • Codeigniter Open Source Applications for Coding References

    - by Hafizul Amri
    I chose CodeIgniter as my first framework to work with. Now I'm looking for open source applications built on this framework to serve as references for good coding practices and standards. From my previous experience in application development, it was hard for me to maintain, upgrade, or modify existing applications due to my bad coding practices. Do you have any suggestions of applications based on the CodeIgniter framework for me to refer to? That way I can learn to write better code by referring to their good, and maybe best, coding practices. Thank you!

    Read the article

  • How to extract a record in a text on string match in a file using bash

    - by private
    Hi, I have a text file sample.txt as:

    =====record1
    title:javabook
    price:$120
    author:john
    path:d:
    =====record2
    title:.netbook
    author:paul
    path:f:
    =====record3
    author:john
    title:phpbook
    subject:php
    path:f:
    price:$150
    =====record4
    title:phpbook
    subject:php
    path:f:
    price:$150

    From this I want to split the data based on author. It should split into 2 files: test1.txt contains

    =====record1
    title:javabook
    price:$120
    author:john
    path:d:
    =====record3
    author:john
    title:phpbook
    subject:php
    path:f:
    price:$150

    and test2.txt

    =====record2
    title:.netbook
    author:paul
    path:f:

    Like the above, I want to classify the main sample.txt file into sub-files based on the author field dynamically. Please suggest me a way to do it.

    Read the article

  • Postfix and tmpfs for /var/spool

    - by Rob Fisher
    My main disk is an SSD, so in order to preserve its lifetime by reducing writes I followed some advice and made /var/spool a ram disk by adding this line to /etc/fstab:

    tmpfs /var/spool tmpfs defaults,noatime,mode=1777 0 0

    Later I configured postfix, because I have a RAID array on my system and mdadm wants to send me email if the RAID array fails, which sounds like a fine idea. Email sending worked fine until I rebooted, at which point:

    postfix: fatal: open /etc/postfix-out/main.cf: No such file or directory

    The fix for this is apparently:

    mkdir /var/spool/postfix
    postfix check

    Then I found I also had to do:

    mkfifo /var/spool/postfix/public/pickup
    service postfix restart

    Now sending emails works fine... until the next reboot. So: what is the most correct way to recreate the contents of /var/spool/postfix automatically at boot time if it does not exist? I am using Ubuntu Server 12.04.

    Read the article
