Search Results

Search found 21556 results on 863 pages for 'control structures'.

Page 437/863 | < Previous Page | 433 434 435 436 437 438 439 440 441 442 443 444  | Next Page >

  • Welch's Juices-up Its Inventory Management with Oracle Supply Chain

    - by [email protected]
    Supply & Demand Chain Executive recently published a great success story about Welch's working with Take Supply Chain and G.SI to implement Oracle Process Manufacturing. The company says it's been able to improve operational control, inventory accuracy, visibility and order fulfillment by automating its processes across three production/warehousing locations nationwide. Improving warehouse and inventory management operations creates efficiencies across a high-velocity nationwide supply chain: Welch's production facilities were collecting more information than ever before on the flow of materials and inventory, but the company needed an effective and accurate method to organize and manage these data. Article found at: http://www.sdcexec.com/publication/article.jsp?pubId=1&id=12256&pageNum=2

    Read the article

  • Announcing the RTM of MvcExtensions (aka System.Web.Mvc.Extensibility)

    - by kazimanzurrashid
    I am proud to announce v1.0 of MvcExtensions (previously known as System.Web.Mvc.Extensibility). There have been quite a few changes and enhancements since the last release. Some of the major changes are: The namespace has been changed from System.Web.Mvc.Extensibility to MvcExtensions, to avoid the unnecessary confusion that it is in the .NET Framework or part of ASP.NET MVC. The project has moved from GitHub to CodePlex. The primary reason to start the project on GitHub was distributed version control, which is no longer a deciding factor, as CodePlex recently added Mercurial support. There is nothing wrong with GitHub, it is an excellent place for managing your project, but CodePlex has always been the native place for .NET projects. MVC 1.0 support has been dropped. I will be covering each feature in my blog, so stay tuned!

    Read the article

  • Best PHP-based web development 'stack' of 2011

    - by Jens Roland
    I have been building PHP-based web sites for many years, and lately it seems I discover another interesting new tool or method every few weeks. This raises the question: what is the current state of the art in PHP development stacks for the seasoned coder? I'm specifically interested in the following: high-performance web server, database, MVC framework, build tool, revision control, continuous integration, automated testing, and non-persistent caching. I'd like to optimize my stack for scalability and rapid development. I'm not looking for personal preference here; I'm looking for real, quantifiable reasons to pick this over that.

    Read the article

  • Real-Time Strategy Gameplay

    - by Ahmad Alkhawaja
    I am working on building an HTML5 RTS game; my current state is that I am building the campaign mode, and I want to define the gameplay (the scoring, unit behaviors/attributes). I am searching for links/articles/books about how to define the gameplay, which for me means: the scoring, figuring out levels of control (in any RTS game there are units, individuals and squads), unit actions/attributes/properties, timing (how long it will take to play?), achievements, etc. I want to see how these areas are usually defined in RTS games; I expect to find a general document discussing this concept that I can use to build the gameplay. Any idea? Is my question clear, or do I need to provide more details?
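
    For the unit-attribute piece specifically, RTS games commonly externalize these definitions as plain data that designers can tune without touching engine code. A minimal sketch of that pattern follows (shown in C# purely for illustration, since the question's HTML5 stack isn't specified; every type and field name here is invented):

        // Data-driven unit definition: numbers live in data, not in engine logic.
        public class UnitDefinition
        {
            public string Name;
            public int HitPoints;
            public float MoveSpeed;
            public int AttackDamage;
            public int Cost;   // build cost, often reused as the unit's score value
        }

        public static class Scoring
        {
            // One common convention: a kill is worth the victim's build cost.
            public static int ScoreForKill(UnitDefinition victim)
            {
                return victim.Cost;
            }
        }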

    Read the article

  • How do I change which audio jacks are used for input and output?

    - by yamaha1996
    I'm using a Realtek HD audio card built into my motherboard. The Windows driver comes with a control panel that allows me to select which back panel jacks are used for what: so, for example, I can make both the blue jack and the green jack outputs, and only the red one mic-in (whereas by default the blue jack is line-in, which I never need). How can I do the same under Linux? If possible, please don't suggest something that involves PulseAudio or JACK; I'd like to do it the plain way, e.g. by editing ALSA configuration files. The way I understand it, my problem should have nothing to do with software servers redirecting streams, just with instructing the driver to treat this jack as so-and-so, because it's supported by the hardware. Thank you very much!
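
    One plain-ALSA avenue worth knowing about: the snd-hda-intel driver can load a "patch" file at module load time that overrides the codec's pin configurations, which is exactly the jack-role table the Windows control panel edits. A minimal sketch under stated assumptions (the file names are hypothetical, and the codec IDs, pin number 0x1b and pin-default value below are placeholders; the real values must be read from /proc/asound/card0/codec#0 on your own machine):

        # /etc/modprobe.d/hda-retask.conf (hypothetical file name)
        options snd-hda-intel patch=hda-retask.fw

    and the patch file itself, /lib/firmware/hda-retask.fw, identifying the codec (vendor ID, subsystem ID, address) and rewriting one pin's default configuration:

        [codec]
        0x10ec0888 0x00000000 0

        [pincfg]
        0x1b 0x01014010

    The hda-jack-retask tool from the alsa-tools package can generate these values interactively, which is usually easier than computing pin defaults by hand.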

    Read the article

  • Dlink DWA-556 Access point fails to start on 2.6.35-25 while 2.6.35-24 works. How can I do this with >2.6.35-24?

    - by Azendale
    I'm using hostapd to run an access point with a Dlink DWA-556 wireless N card. However, I can no longer get it to start when I use kernels greater than 2.6.35-24. Here's a log where I ran the uname -a&&hostapd -c <configfile> on the different kernel versions. Linux erikbandersen 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 GNU/Linux Configuration file: hostapd.conf ctrl_interface_group=0 Opening raw packet socket for ifindex 248 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf nl80211: Added 802.11b mode based on 802.11g information HT40: control channel: 2 secondary channel: 6 RATE[0] rate=10 flags=0x2 RATE[1] rate=20 flags=0x6 RATE[2] rate=55 flags=0x6 RATE[3] rate=110 flags=0x6 RATE[4] rate=60 flags=0x0 RATE[5] rate=90 flags=0x0 RATE[6] rate=120 flags=0x0 RATE[7] rate=180 flags=0x0 RATE[8] rate=240 flags=0x0 RATE[9] rate=360 flags=0x0 RATE[10] rate=480 flags=0x0 RATE[11] rate=540 flags=0x0 Passive scanning not supported Mode: IEEE 802.11g Channel: 2 Frequency: 2417 MHz Flushing old station entries Deauthenticate all stations Using interface wlan1 with hwaddr 1c:bd:b9:d5:e8:3c and ssid 'erikbandersen.com/freewifi' wlan1: Setup of interface done. MGMT (TX callback) ACK Malformed netlink message: len=436 left=256 plen=420 256 extra bytes in the end of netlink message MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb mgmt::auth authentication: STA=3c:4a:92:0e:41:2f auth_alg=0 auth_transaction=1 status_code=0 wep=0 New STA wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: authentication OK (open system) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-AUTHENTICATE.indication(3c:4a:92:0e:41:2f, OPEN_SYSTEM) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) authentication reply: STA=3c:4a:92:0e:41:2f auth_alg=0 auth_transaction=2 resp=0 (IE len=0) MGMT (TX callback) ACK mgmt::auth cb wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: authenticated mgmt::assoc_req association request: STA=3c:4a:92:0e:41:2f capab_info=0x421 listen_interval=10 Validating WMM IE: OUI 00:50:f2 OUI type 2 OUI sub-type 0 version 1 QoS info 0x0 HT: STA 3c:4a:92:0e:41:2f HT Capabilities Info: 0x102c handle_assoc STA 3c:4a:92:0e:41:2f - no greenfield, num of non-gf stations 1 handle_assoc STA 3c:4a:92:0e:41:2f - 20 MHz HT, num of 20MHz HT STAs 1 hostapd_ht_operation_update current operation mode=0x0 hostapd_ht_operation_update new operation mode=0x7 changes=2 new AID 1 wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: association OK (aid 1) MGMT (TX callback) ACK mgmt::assoc_resp cb wlan1: STA 3c:4a:92:0e:41:2f IEEE 802.11: associated (aid 1) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-ASSOCIATE.indication(3c:4a:92:0e:41:2f) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) wlan1: STA 3c:4a:92:0e:41:2f RADIUS: starting accounting session 4DAC8224-00000000 MGMT (TX callback) ACK mgmt::action cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb MGMT (TX callback) ACK mgmt::proberesp cb Signal 2 received - terminating wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DEAUTHENTICATE.indication(3c:4a:92:0e:41:2f, 1) wlan1: STA 3c:4a:92:0e:41:2f MLME: MLME-DELETEKEYS.request(3c:4a:92:0e:41:2f) Removing station 3c:4a:92:0e:41:2f hostapd_ht_operation_update current operation mode=0x7 hostapd_ht_operation_update new operation mode=0x0 changes=2 
Flushing old station entries Deauthenticate all stations . Linux erikbandersen 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux Configuration file: hostapd.conf ctrl_interface_group=0 Opening raw packet socket for ifindex 248 BSS count 1, BSSID mask ff:ff:ff:ff:ff:ff (0 bits) SIOCGIWRANGE: WE(compiled)=22 WE(source)=21 enc_capa=0xf nl80211: Added 802.11b mode based on 802.11g information Allowed channel: mode=1 chan=1 freq=2412 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=2 freq=2417 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=3 freq=2422 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=4 freq=2427 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=5 freq=2432 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=6 freq=2437 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=7 freq=2442 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=8 freq=2447 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=9 freq=2452 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=10 freq=2457 MHz max_tx_power=27 dBm Allowed channel: mode=1 chan=11 freq=2462 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=1 freq=2412 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=2 freq=2417 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=3 freq=2422 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=4 freq=2427 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=5 freq=2432 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=6 freq=2437 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=7 freq=2442 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=8 freq=2447 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=9 freq=2452 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=10 freq=2457 MHz max_tx_power=27 dBm Allowed channel: mode=0 chan=11 freq=2462 MHz max_tx_power=27 dBm HT40: control channel: 2 secondary channel: 6 RATE[0] rate=10 flags=0x2 RATE[1] rate=20 flags=0x6 RATE[2] rate=55 flags=0x6 RATE[3] rate=110 flags=0x6 RATE[4] rate=60 flags=0x0 RATE[5] rate=90 flags=0x0 RATE[6] rate=120 flags=0x0 RATE[7] rate=180 flags=0x0 RATE[8] rate=240 flags=0x0 RATE[9] rate=360 flags=0x0 RATE[10] rate=480 flags=0x0 RATE[11] rate=540 flags=0x0 Passive scanning not supported Mode: IEEE 802.11g Channel: 2 Frequency: 2417 MHz Could not set channel for kernel driver wlan1: Unable to setup interface. My wireless card is listed as 02:00.0 Network controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) by lspci. Am I doing it wrong and there's a new way of doing it? I'm holding off upgrading to Natty because of this. What changed between the versions that would cause this? Should I report it as a bug?

    Read the article

  • Jupiter in Ubuntu 13.10 (Laptop Overheating)

    - by Daniel Pacheco
    I was wondering if Jupiter (an interface for display, power and device control) will work in Ubuntu 13.10, because my laptop (Toshiba Satellite C855D, AMD A6-4400M with Radeon HD Graphics, running Ubuntu 13.04 x64) keeps overheating. I tried some other tools, like laptop-mode-tools and TLP, but none of those work at all. Jupiter was the only option, and it's supposedly discontinued; the version I'm using is maintained by the JoliCloud team, but they told me they're not sure it will work with 13.10... If it doesn't work, I'm definitely not upgrading, since overheating is a major issue for me. Thanks in advance!

    Read the article

  • A good software development book

    - by Mahmoud Hossam
    I've searched this website, as well as SO, for a question like this, and I still haven't found what I'm looking for: a book similar to Head First Software Development. I want to know more about the different stages of software development; I already know about coding, but I don't know much about unit testing, version control, integration, design, etc. P.S. It'd be nice if the book weren't a thousand pages long. Edit: I'm looking for an introductory text, not a book about the latest trends in software development.

    Read the article

  • Networking gameplay - Sending controller inputs vs. sending game actions

    - by liortal
    I'm reading about techniques for implementing game networking. Some of the resources I've read state that it is common practice (at least for some games) to send the actual controller input across the network, to be fed into the remote game's loop for processing. This seems a bit odd to me, and I'd like to know the benefits of such a method. To me, controller input is merely a way to gather data to be fed into the game, which in turn determines how to translate it into specific game actions. Why would I want to send the control data and not the game actions themselves?
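
    For contrast, here is a minimal sketch of the two message shapes in question (C#; all type and field names are invented for illustration). Sending raw input keeps packets tiny and constant-sized, which is the basis of the deterministic-lockstep model many RTS games use: every peer feeds the same inputs into the same simulation and derives the resulting actions locally.

        // What an input-sending (lockstep-style) game puts on the wire:
        public struct InputMessage
        {
            public int Tick;            // simulation tick this input applies to
            public byte PlayerId;
            public ushort ButtonMask;   // raw button/key state
            public short CursorX, CursorY;
        }

        // The alternative: serialize the derived game action itself.
        public struct ActionMessage
        {
            public int Tick;
            public byte PlayerId;
            public int[] SelectedUnitIds;  // grows with the player's selection
            public float TargetX, TargetY;
        }

        // With InputMessage, ordering 500 units to move costs the same bandwidth
        // as ordering one, because each peer recomputes the action locally.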

    Read the article

  • Silverlight Cream for January 11, 2011 -- #1024

    - by Dave Campbell
    1,000 blogposts is quite a few, but to die-hard geeks, 1000 isn't the number... 1K is the number, and today is my 1K blogpost! I've been working up to this for at least 11 months. Way back at MIX10, I approached some vendors about an idea I had. A month ago I contacted them and others, and everyone I contacted was very generous and supportive of my idea. My idea was not to run a contest, but blog as normal, and whoever ended up on my 1K post would get some swag... and I set a cut-off at 13 posts. So... blogging normally, I had some submittals, and then ran my normal process to pick up the next posts until I hit a total of 13. To provide a distribution channel for the swag, everyone on the list, please send me your snail mail (T-shirts) and email (licenses) addresses as soon as possible. I'd like to thank the following generous sponsors for their contributions to my fun (in alphabetic order): Rachel Hawley for contributing 4 Silverlight control sets; First Floor Software and Koen Zwikstra for contributing 13 licenses for Silverlight Spy; Sara Faatz/Jason Beres for contributing 13 licenses for Silverlight Data Visualization controls; Svetla Stoycheva for contributing T-Shirts for everyone on the post; Ina Tontcheva for contributing 13 licenses for RadControls for Silverlight + RadControls for Windows Phone; and Charlene Kozlan for contributing 1 combopack standard, 2 DataGrid for Silverlight, and 2 Listbox for Silverlight Standard. And now finally...in this Issue: Nigel Sampson, Jeremy Likness, Dan Wahlin, Kunal Chowdhury, Alex Knight, Wei-Meng Lee, Michael Crump, Jesse Liberty, Peter Kuhn, Michael Washington, Tau Sick, Max Paulousky, Damian Schenkelman. Above the Fold: Silverlight: "Demystifying Silverlight Dependency Properties" Dan Wahlin WP7: "Using Windows Phone Gestures as Triggers" Nigel Sampson Expression Blend: "PathListBox: making data look cool" Alex Knight From SilverlightCream.com: Using Windows Phone Gestures as Triggers Nigel Sampson blogged about WP7 Gestures, the Toolkit, and using Gestures as Triggers, and actually makes it look simple :) Jounce Part 9: Static and Dynamic Module Management Jeremy Likness has episode 9 of his explanation of his MVVM framework, Jounce, up... and a big discussion of Modules and Module Management from a Jounce perspective. Demystifying Silverlight Dependency Properties Dan Wahlin takes a page from one of his teaching opportunities, and shares his knowledge of Dependency Properties with us... beginning with what they are, defining them in code, and demonstrating their use. Customizing Silverlight ChildWindow Style using Blend Kunal Chowdhury has a great post up about getting your Child Windows to match the look & feel of the rest of your app... plus a bunch of Blend goodness thrown in. PathListBox: making data look cool File this post by Alex Knight in the 'holy crap' file along with the others in this series! ... just check out that cool Ticker Style Path ListBox at the top of the blog... too cool! Web Access in Windows Phone 7 Apps Wei-Meng Lee has the 3rd part of his series on WP7 development up, and in this one is discussing Web Access... I mean *discussing* it... tons of detail, code, and explanation... great post. Prevent your Silverlight XAP file from caching in your browser. Michael Crump helps relieve stress on Silverlight developers everywhere by exploring how to avoid caching of your XAP in the browser...
(WPFS) MVVM Light Toolkit: Soup To Nuts Part I Jesse Liberty continues his Windows Phone from Scratch series with a new segment exploring Laurent Bugnion's MVVMLight Toolkit, beginning with acquiring and installing the toolkit, then proceeding to discuss linking the View and ViewModel, the ViewModel Locator, and page navigation. Silverlight: Making a DateTimePicker Peter Kuhn attacks a problem that crops up on the forums a lot -- a DateTimePicker control for Silverlight... following the "It's so simple to build one yourself" advice, he did so, and provides the code for all of us! Windows Phone 7 Animated Button Press Michael Washington took exception to button presses that gave no visual feedback and produced a behavior that does just that. Using TweetSharp in a Windows Phone 7 app Tau Sick demonstrates using TweetSharp to put a twitter feed into a WP7 app, as he did in "Hangover Helper"... all the instructions, from getting TweetSharp to the necessary code. Bindable Application Bar Extensions for Windows Phone 7 Max Paulousky has a post discussing some real extensions to the ApplicationBar for WP7... he begins with a bindable application bar by Nicolas Humann that I'd missed, probably because his blog is in French... and extends it to allow using DelegateCommand. How to: Load Prism modules packaged in a separate XAP file in an OOB application Damian Schenkelman posts about Prism, AppModules in separate XAPs and running OOB... if you've tried this, you know it's a hassle... Damian has the solution. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • AJAX 4 in an ASP.NET 4 Web Application

    - by renatohaddad
    I've been running some tests on the AJAX Control Toolkit 4, which is meant to be used with ASP.NET 4 in Visual Studio .NET 2010, and I confess I liked it a lot. The download link is http://www.asp.net/ajaxlibrary/act.ashx, and all the instructions are on the site. I noticed there are several new controls, and one that caught my attention was the asynchronous upload control, which handles file uploads to the server. These new features are worth studying. For anyone who already used AJAX with ASP.NET 3.5, the idea of the Toolkit is the same, except for the addition of new controls. With AJAX you can change the whole behavior of your web application: requests to the server become less frequent, and the AJAX controls help a great deal with the layout. VS 2010 already ships with the AJAX that Microsoft supports natively (ScriptManager, UpdatePanel, UpdateProgress, etc.), but it's worth implementing some of the Toolkit controls. Happy studying!

    Read the article

  • How can architects work with self-organizing Scrum teams?

    - by Martin Wickman
    An organization with a number of agile Scrum teams also has a small group of people appointed as "enterprise architects". The EA group acts as control and gatekeeper for quality and adherence to decisions. This leads to overlaps between team decisions and EA decisions. For instance, the team might want to use library X, or to use REST instead of SOAP, but the EA does not approve. Now, this can lead to frustration when team decisions are overruled. Taken far enough, it can potentially lead to a situation where the EA people "grab" all the power and the team ends up feeling demotivated and not very agile at all. The Scrum Guide has this to say about it: "Self-organizing: No one (not even the Scrum Master) tells the Development Team how to turn Product Backlog into Increments of potentially releasable functionality." Is that reasonable? Should the EA team be disbanded? Should the teams refuse, or simply comply?

    Read the article

  • Technologies similar to Flash and Silverlight for Desktop apps

    - by M.A. Hanin
    Long story short: we use Flash as a partial GUI in our .NET desktop applications. Normally, this means that the Flash player control sits in some WinForm, playing a movie file. Changes in the real world are presented in the movie (a light bulb turned on in the real world? a matching one lights up inside the Flash movie), and interaction with the instances in the movie affects the real world (clicked the light bulb? the light bulb in the real world turns on). My question is: which technologies/products can offer me similar capabilities? Of course, I'm looking for something that can compete with Flash/Silverlight: animations, object-oriented scripting and design, powerful tools allowing the artists to design symbols conveniently, etc. Static image objects won't cut it.
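
    For readers unfamiliar with the setup being described, here is a minimal sketch of hosting the Flash player in a WinForm, assuming wrapper assemblies generated from the Flash Player OCX with aximp.exe (the generated names below are the conventional ones, and the movie path is hypothetical):

        using System;
        using System.Windows.Forms;
        using AxShockwaveFlashObjects;   // generated by aximp.exe from the Flash OCX

        public class FlashHostForm : Form
        {
            private readonly AxShockwaveFlash flash = new AxShockwaveFlash();

            public FlashHostForm()
            {
                flash.Dock = DockStyle.Fill;
                Controls.Add(flash);
                Load += delegate
                {
                    flash.Movie = @"C:\panels\lightbulb.swf"; // hypothetical movie
                    flash.Play();
                };
                // The movie signals events (e.g. "light bulb clicked") via
                // ExternalInterface; e.request carries the raw XML of the call.
                flash.FlashCall += (sender, e) => Console.WriteLine(e.request);
            }
        }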

    Read the article

  • Google I/O 2012 - Designing for the Other Half: Sexy Isn't Always Pink

    Leah Busque, Sepideh Nasiri, Jess Lee, Tracy Chou, Margaret Wallace. Women control 80 percent of consumer spending and drive the majority of user activity on many of the largest social networks. Female gamers over 55 spend the most time online gaming among any demographic. Are you thinking about how your product or business is attracting and engaging women? Hear from our panel on the technologies winning over female users that aren't so pink. (Session video runs 59:33.)

    Read the article

  • Looking for WAMP Benchmarking (my current WAMP is very slow, so are other solutions)

    - by therobyouknow
    I'm running the ZWAMP WAMP stack on my local development machine, but I have found it to be very slow at serving pages from a Drupal site I have set up. By contrast, my live production site on shared hosting is reasonably quick. For me, the goal with a local WAMP stack was to develop offline and send completed work to the live production site. I liked ZWAMP because it didn't require adjustments to User Account Control or other permissions. I've looked at the Drupal Acquia development stack but found it too restrictive: only one site instance/doc root can be installed. I've looked at other DAMP stacks and heard reports of them being slow. My local development machine that I am running the WAMP stack on is a dual-core 2.6GHz hyperthreaded Intel i7 with 4GB RAM and a 7200rpm hard disk, running 64-bit Windows Professional. Surely this is fast enough. So I'm looking for: causes of the slowness of the WAMP stack and how to improve the speed, and benchmark data for various WAMP stacks.

    Read the article

  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here. We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed. One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress). The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries. The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages... for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Here's an example: Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties... button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to be worked with: Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall.
In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names. From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards. This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific. To discover which version of Excel first supported Web Queries, go to http://bit.ly/OldSchoolXL.
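
    To make the server side concrete, here is a minimal sketch of such a page as a raw ASP.NET HTTP handler (class, table and column names are invented for illustration; any page that emits a plain HTML table with column names in the first row will do):

        using System.Web;

        // Serves a query result as an HTML table that Excel's Web Query
        // feature can consume.
        public class WidgetSalesHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/html";
                var w = context.Response.Output;
                w.Write("<html><body><table>");
                w.Write("<tr><th>Region</th><th>Units</th></tr>"); // column names first
                // In a real handler, these rows would come from a database query.
                w.Write("<tr><td>East</td><td>1200</td></tr>");
                w.Write("<tr><td>West</td><td>950</td></tr>");
                w.Write("</table></body></html>");
            }
        }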

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere, and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I'm not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I'm going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control. Defining assemblies as fully-trusted In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain: // get the Assembly object for the assembly Assembly assemblyWithApi = ... // get the StrongName from the assembly's collection of evidence StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>(); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain( "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName); Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call: // within assemblyWithApi public class MyApi { public static void MethodToDoThings() { ... } } // within the sandboxed dll public class UntrustedSandboxedClass { public void DodgyMethod() { ... MyApi.MethodToDoThings(); ... } } However, if you try to do this, you get quite an ugly exception: MethodAccessException: Attempt by security transparent method 'UntrustedSandboxedClass.DodgyMethod()' to access security critical method 'MyApi.MethodToDoThings()' failed. Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code. Security transparency and AllowPartiallyTrustedCallersAttribute So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api: [SecuritySafeCritical] public static void MethodToDoThings() { ... } However, this doesn't solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What's going on? By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods.
This is because it may not have been designed in a secure way when called from transparent code - as we'll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly: [assembly: AllowPartiallyTrustedCallers] and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application. Conclusion That's the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, applying APTCA to an assembly asserts that you have run a full security audit of every type and member in it. If you haven't, then you could inadvertently open a security hole. I'll be looking at ways this can happen in my next post.
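
    Since the post mentions the MarshalByRefObject route without showing it, here is a minimal hedged sketch of what that bridge can look like (type and method names are invented; the sandbox-creation code is the same as in the post):

        using System;
        using System.Security;

        // Lives in the fully-trusted API assembly (the one marked with
        // [assembly: AllowPartiallyTrustedCallers] and listed as full-trust
        // when the sandbox is created). Because it derives from
        // MarshalByRefObject, sandboxed code only ever holds a proxy to it,
        // and calls execute in the appdomain where the instance was created.
        public class SandboxBridge : MarshalByRefObject
        {
            [SecuritySafeCritical]
            public void ReportProgress(string message)
            {
                Console.WriteLine("Addin says: " + message);
            }
        }

        // Hypothetical entry point inside the addin dll:
        //
        //   public class AddinEntryPoint : MarshalByRefObject
        //   {
        //       public void Run(SandboxBridge bridge)
        //       {
        //           bridge.ReportProgress("hello from the sandbox");
        //       }
        //   }
        //
        // Host side, after creating the sandboxed appdomain as in the post:
        //
        //   var entry = (AddinEntryPoint)sandbox.CreateInstanceAndUnwrap(
        //       addinAssemblyName, "AddinEntryPoint");
        //   entry.Run(new SandboxBridge());  // bridge created in the host domain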

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 and the presentation can be downloaded here. See also the blog about Alert Monitoring and Problem Notification and the blog about Using Operational Profiles to Install Packages and other content. Here is a quick summary of what you can do with OS Analytics in Ops Center: view historical charts and real time values of CPU, memory, network and disk utilization; find the top CPU and memory processes in real time or on a certain historical day; determine proper monitoring thresholds based on historical data; drill down into a process's details. Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category. In this screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for the process along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real time information for all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty. The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the current set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want either to have custom monitoring for a specific machine, or to use this machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".
Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Looking at yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am along with some memory use, and in the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time. Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/. An "os.zip" file exists for the main OS; inside you will find many small text files, named after the epoch time stamp at which they were taken. If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it. If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller. The actual script collecting the data can be viewed for debugging purposes as well: on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • How to name a subclass that adds a minor, detailed thing?

    - by Louis Rhys
    What is the most concise (yet descriptive) way of naming a subclass that only adds a specific minor thing to the parent? I encounter this case a lot in WPF, where sometimes I have to add a small piece of functionality to an out-of-the-box control for specific cases. Example: TreeView doesn't change the SelectedItem on right-click, but I have to make one that does in my application. Some possible names are TreeViewThatChangesSelectedItemOnRightClick (way too wordy, and maybe difficult to read because there are so many words concatenated together), TreeView_SelectedItemChangesOnRightClick (slightly more readable, but still too wordy, and the underscore also breaks the normal convention for class names), TreeViewThatChangesSIOnRC (non-obvious acronym), ExtendedTreeView (more concise, but doesn't describe what it is doing; besides, I already found a class called this in the library that I don't want to use/modify in my application), LouisTreeView, MyTreeView, etc. (doesn't describe what it is doing). It seems that I can't find a name which sounds right. What do you do in situations like this?
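
    For reference, the behavior being named is small enough to show whole; here is a minimal WPF sketch (the class name is just one more candidate, chosen for illustration, and the hit element is assumed to be a visual, as it is with the default item template):

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;
        using System.Windows.Media;

        // A TreeView that also moves SelectedItem on right-click,
        // matching the behavior described in the question.
        public class RightClickSelectTreeView : TreeView
        {
            protected override void OnPreviewMouseRightButtonDown(MouseButtonEventArgs e)
            {
                base.OnPreviewMouseRightButtonDown(e);

                // Walk up the visual tree from the clicked element
                // to its TreeViewItem container.
                var node = e.OriginalSource as DependencyObject;
                while (node != null && !(node is TreeViewItem))
                    node = VisualTreeHelper.GetParent(node);

                if (node is TreeViewItem item)
                    item.IsSelected = true;
            }
        }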

    Read the article

  • Save Points

    - by raghu.yadav
    Explicit save point: requires an end user action before a bounded or unbounded task flow creates a save point. For example, an end user clicks a button that invokes a method call activity that, in turn, creates a save point. Implicit save point: can only originate from a bounded task flow if 1) a session times out due to end user inactivity, 2) an end user logs out without saving the data, 3) an end user closes the only browser window, thus logging out of the application, or 4) an end user navigates away from the current application using control flow rules (for example, uses a goLink component to go to an external URL) while having unsaved data. Good use cases and examples are given by Frank and Biemond on implicit save points: http://www.oracle.com/technology/products/jdev/tips/fnimphius/cancelForm/cancelForm_wsp.html?_template=/ocom/print http://biemond.blogspot.com/2008/04/automatically-save-transactions-with.html

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will understand the role of key-value pair databases and document databases in supporting the Big Data story. Now we will see a few examples of the operational databases: Relational Databases (yesterday's post), NoSQL Databases (yesterday's post), Key-Value Pair Databases (this post), Document Databases (this post), Columnar Databases (tomorrow's post), Graph Databases (tomorrow's post), Spatial Databases (tomorrow's post). Key Value Pair Databases Key Value Pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow the ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication as well as high availability. In KVP databases the data is stored as strings. Here is a simple example of how a Key Value database might look:

        Key      Value
        -------  -----------
        Name     Pinal Dave
        Color    Blue
        Twitter  @pinaldave
        Name     Nupur Dave
        Movie    The Hero

    As the number of users grows in Key Value Pair databases, it starts getting difficult to manage the entire database. As there is no specific schema or rules associated with the database, there are chances that the database grows exponentially as well. It is very crucial to select the right Key Value Pair database which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same. Riak Riak is one of the most popular Key Value databases. It is known for its scalability and performance in high volume and velocity databases. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will further discuss Riak in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage keeping scalability in mind, KVP databases are good options to consider. Document Database There are two different kinds of document databases: 1) full document content (web pages, word docs, etc.) and 2) storing document components for storage. It is the second type of document database that we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand notation, and it is very easy to write for applications. There are two major structures of JSON used for document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases. MongoDB MongoDB databases are made up of collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, and supports query services as well as MapReduce. It is often used in high volume content management systems. CouchDB CouchDB databases are composed of documents which consist of fields and attachments (known as description). It supports ACID properties.
The main attraction of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which are changing very frequently. A good example of document database usage is real-time analytics in social networking or content management systems. Tomorrow In tomorrow's blog post we will discuss various other operational databases supporting Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
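
    To make the JSON structure concrete, here is a small illustrative document of the kind MongoDB or CouchDB would store (the field values echo the KVP table above; the array shows the "ordered list" structure the post mentions):

        {
            "name": "Pinal Dave",
            "color": "Blue",
            "twitter": "@pinaldave",
            "favoriteMovies": ["The Hero"]
        }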

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements; it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool.
RESTORE DATABASE WidgetProduction_virtual FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY GO   Note that the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration: against my 8GB production database and its corresponding backup file, the .vldf and .vmdf files represent the only additional storage used for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

< Previous Page | 433 434 435 436 437 438 439 440 441 442 443 444  | Next Page >