Search Results



  • deleting old unused images

    - by Ayyash
    As we move on with our content-based websites, lots of images get dumped in our images folder, but we rarely come across self-committed monkeys who delete their files once they are no longer needed. That means we end up with a huge list of images in one folder, and it is very tricky to clean up. My question is (and I don't know if this is the right website to ask it): is there a tool that lets me find out whether an image has been requested over the web in the last (n) months? My other, more general questions: how do you do it? How do you take control of your images folders? What policy do you enforce on developers to clean up? What measures do you take to decide what goes and what stays if you end up with an out-of-control situation? My suggestion was to rename the images folder, create a new one, copy the basic images over, and wait for someone to complain about a broken image! :) I find this to be the most efficient.
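    One way to approximate the "requested in the last (n) months" check is to mine the web server's access logs. A minimal sketch in Python, assuming Apache/nginx combined-format logs, a single images folder, and that your retained logs actually cover the window you care about (all paths and the cutoff are placeholders):

        import os
        import re
        import time

        LOG_FILE = "/var/log/apache2/access.log"   # assumed log location
        IMAGES_DIR = "/var/www/site/images"        # assumed images folder
        CUTOFF = time.time() - 180 * 24 * 3600     # roughly six months

        # Collect every image name referenced in the access log.
        requested = set()
        pattern = re.compile(r'"(?:GET|HEAD) [^ ]*/images/([^ ?"]+)')
        with open(LOG_FILE) as log:
            for line in log:
                match = pattern.search(line)
                if match:
                    requested.add(match.group(1))

        # List images never requested in the log window and older than the cutoff.
        for name in os.listdir(IMAGES_DIR):
            path = os.path.join(IMAGES_DIR, name)
            if name not in requested and os.path.getmtime(path) < CUTOFF:
                print(name)

    Anything the log window can't vouch for still needs human judgment, which is where the rename-and-wait trick earns its keep.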

    Read the article

  • FMw Diagnostic Framework: Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction

    There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13, "Diagnosing Problems":

    "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities."

    In other words, the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. A range of critical WebLogic Server issues falls within the scope of Diagnostic Framework.

    How to Configure Diagnostic Framework

    Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM, etc.), WebCenter, and SOA. Check your WebLogic Domain directory structure: if you have an "adr" sub-directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub-directory not exist, review the advice given in My Oracle Support article "How to Apply FMW (EM) Control and JRF to a WebLogic Domain and Managed Servers" [ID 947043.1]. If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF:

    - Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g
    - WebLogic Diagnostics Framework - A Very Useful Tool [a nice blog which describes a WLDF use case]

    How to Get Started with Diagnostic Framework

    To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13, "Diagnosing Problems". A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below.

    How to Upload Diagnostic Framework Incident Data to Oracle Support

    Some background information: there are three interfaces to the repository:

    - Enterprise Manager Cloud Control (Support Workbench)
    - WLST (command line)
    - ADRCI (command line)

    Enterprise Manager Cloud Control does provide a nice GUI to search, view and package Diagnostic Framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package and has its own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command-line tools. Ideally, you would need only one command-line interface, but currently I suggest using both - mainly because ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code.

    Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message.

    1) Find the incident

    Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server, the example WLST commands will not work.

    a) Launch WLST. Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin), otherwise you will get a syntax error like the one below:

        Traceback (innermost last):
          File "<console>", line 1, in ?
        NameError: listIncidents

        MW_HOME/oracle_common/common/bin/wlst.sh

    b) Connect to the managed server or the admin server, e.g.

        wls:/offline> connect('weblogic','welcome1','t3://localhost:7020')

    c) Run the command

        wls:/MyDomain/serverConfig> listIncidents()

    This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command

        wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer', server='MyManagedServer')

    Example output:

        Incident Id   Problem Key                                            Incident Time
        1             DFW-99998 [java.lang.NullPointerException]
                      [oracle.error.simulator.ErrorSimulator.createNullPointerException]
                      [errorWebApp_1-0-0-0]                                  Fri Nov 02 10:38:46 GMT 2012

    The problem key (the part after the error code) is the description you do not see when using the ADRCI SHOW INCIDENTS command. Make a note of the incident id; you are ready to move to step 2.

    2) Package the incident

    a) Set up the environment - example commands below are for Unix:

        cd <DOMAIN_HOME>/bin
        . ./setDomainEnv.sh

    If you want ADRCI to run a Remote Diagnostic Agent (RDA) collection (recommended) at generate-package time, point ORACLE_HOME at oracle_common:

        ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME

    To prevent ADRCI from running RDA at generate-package time, point ORACLE_HOME at the WL_HOME/server/adr directory:

        ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME

    b) Launch adrci:

        $WL_HOME/server/adr/adrci

    c) Set BASE and HOMEPATH:

        adrci> SET BASE /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr
        adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver

    d) Optionally run SHOW INCIDENTS, e.g.

        adrci> SHOW INCIDENTS -MODE DETAIL

        ADR Home = /oracle/middleware/user_projects/domains/mydomain/servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:
        *************************************************************
        INCIDENT INFO RECORD 1
        *************************************************************
        INCIDENT_ID                1
        STATUS                     ready
        CREATE_TIME                2012-11-02 10:38:46.468000 +00:00
        PROBLEM_ID                 1
        CLOSE_TIME                 <NULL>
        FLOOD_CONTROLLED           none
        ERROR_FACILITY             DFW
        ERROR_NUMBER               99998
        ERROR_ARG1 .. ERROR_ARG12  <NULL>
        SIGNALLING_COMPONENT       <NULL>
        SIGNALLING_SUBCOMPONENT    <NULL>
        SUSPECT_COMPONENT          <NULL>
        SUSPECT_SUBCOMPONENT       <NULL>
        ECID                       5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325
        IMPACTS                    0
        1 rows fetched

    e) Create a logical package: IPS CREATE PACKAGE INCIDENT incident_number, e.g.

        adrci> IPS CREATE PACKAGE INCIDENT 1
        Created package 1 based on incident id 1, correlation level typical

    f) Generate the package: IPS GENERATE PACKAGE package_number IN path, e.g.

        adrci> IPS GENERATE PACKAGE 1 IN /tmp
        Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete

    Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr.

    3) Upload the package zip to Oracle Support via your Service Request

    a) Log into My Oracle Support and locate your Service Request.
    b) Click on "Add Attachments".
    c) Upload the zip file.
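    If you find yourself repeating the WLST half of this often, it can be scripted; WLST scripts are Jython. A minimal sketch (the URL, credentials and server names are placeholders for your own domain), saved as list_incidents.py and run with MW_HOME/oracle_common/common/bin/wlst.sh list_incidents.py:

        # list_incidents.py - run with the oracle_common WLST, not WL_HOME/common/bin.
        # All connection details and names below are placeholders.
        connect('weblogic', 'welcome1', 't3://localhost:7020')

        # Incidents for the server we connected to...
        listIncidents()

        # ...and, when connected to the Admin Server, for a named managed server.
        listIncidents(adrHome='diag\\ofm\\MyDomain\\MyManagedServer',
                      server='MyManagedServer')

        disconnect()
        exit()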

    Read the article

  • Platform for Efficiency: Boeing Defense, Space & Security integrates supply chain processes using Oracle Business Process Management solutions. by Fred Sandsmark

    - by JuergenKress
    Like most companies, aerospace giant Boeing has its jargon - words and phrases that uniquely define its products and processes. Take the word platform. It is used at Boeing to mean a family of aircraft - the F/A-18 fighter, for example, or the 777 jetliner. Since August 2009, employees in the Global Services & Support (GS&S) division of Boeing Defense, Space & Security have been talking about a different sort of platform: a supply chain technology platform, based on Oracle Business Process Management (Oracle BPM) solutions and Oracle SOA Suite. That platform, built with the assistance of Oracle Diamond Partner Capgemini, is serving as a jumping-off point for Boeing's GS&S staff to deploy radically improved business processes, supported by Oracle Fusion Applications, to build a high-visibility, end-to-end supply chain. This business process-driven technology platform has ambitious goals: to help GS&S respond more quickly and accurately to its customers' needs, to make business processes at all GS&S sites more consistent and less expensive, and to create a foundation for further improvement and efficiency. Read the full article here.

    Read the article

  • How to diagnose and fix Kernel Panic Fatal Machine Check error?

    - by 0x4a6f4672
    I have a new Samsung Series 7 laptop with a dual-boot setup for Windows 8 and Ubuntu 12.10 - a fine machine, comparable to a MacBook Pro. The Ubuntu installation was quite a hassle, but with the help of Boot Repair it finally seemed to work. Or so I thought. Windows 8 starts fine, but when I start Ubuntu the following Machine Check Exception error regularly occurs, quite similar to this one:

        [Hardware Error] CPU 1: Machine Check Exception: 5 Bank 6
        [Hardware Error] RIP !inexact! 33 <00007fab2074598a>
        [Hardware Error] TSC 95b623464c ADDR fe400 MISC 3880000086
        .. [similar messages for CPU 2, 3 and 0] ..
        [Hardware Error] Machine Check: Processor context corrupt
        Kernel panic - not syncing: Fatal Machine Check
        Rebooting in 30 seconds

    Kernel panic does not sound good. Then it starts to reboot, and the second boot attempt often works. Is it a kernel or driver problem? The laptop has an Intel Core i7 processor. I already deactivated Hyper-Threading in the BIOS, but it does not seem to help :-( I also disabled the Execute Disable Bit (EDB) flag in the BIOS - EDB is an Intel hardware-based security feature that can help reduce system exposure to viruses and malicious code. Since I disabled it, the error occurs less frequently, but it still appears occasionally :-( It seems to be the same error as described here and here. Maybe a Samsung-specific kernel problem? A similar error also happens on a Samsung Ultrabook Series 9 (which seems to be kernel bugs 49161 and 47121). On my Samsung Series 7, it still occurs, for instance, during booting on battery after "Checking battery state". Perhaps anyone else has an idea? These kernel panic errors are really annoying.
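    One way to capture more detail between panics, offered as a sketch rather than a fix, and assuming the machine survives some of the less fatal machine checks: the mcelog tool decodes raw MCE records into readable text (commands below are for Ubuntu):

        sudo apt-get install mcelog
        # Decoded machine-check records logged while the kernel stayed up:
        sudo cat /var/log/mcelog
        # Raw "Machine check" traces from earlier boots:
        grep -i "machine check" /var/log/kern.log*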

    Read the article

  • What You Said: How You Track Your Time

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite time tracking tips, tricks, and tools. Now we're back to highlight the techniques HTG readers use to keep tabs on their time. While more than one of you expressed confusion over the idea of tracking how you spend all your time, many of you were more than happy to share the reasons for and the methods you use to stay on top of your time expenditures. Scott uses a fluid and flexible project management tool:

        I use kanbanflow.com, with two boards to manage task prioritisation and backlog. One board called 'Current Work' has three columns: 'Do Today', 'In Progress' and 'Done'. The other is called 'Backlog', which splits tasks into priority groups - 'Distractions (NU+NI)', 'Goals (NU+I)', 'Interruptions (U+NI)' and 'Critical (U+I)', where U is Urgent and I is Important (and N is Not). At the end of each day, I move things from my Backlog to my 'Current Work' board, with the idea being to complete Goals before they become Critical. That way I can focus on the 'Current Work' Do Today column, so I don't feel overwhelmed and can plan my day. As priorities change or interruptions pop up, it's just a matter of moving tasks between boards. I have both tabs open in my browser all day - this is probably good for knowledge workers strapped to their desk, not so good for those in meetings all day. In that case, go with the calendar on your phone.

    While the above description might make it sound really technical, we took the cloud-based app for a spin and found the interface to be very flexible and easy to use.

    Read the article

  • Linux router and firewall with IP accounting

    - by Andrew
    I'm working on a project to replace my organisation's aging Slackware gateway/router/firewall machine in our colo rack. Previously we used rc.firewall, but we are now looking for something more modern and easily configurable. The requirements are:

    - Act as a gateway router & firewall
    - Port forwarding to a Terminal Server in the colo
    - IP/traffic accounting, preferably accessible via SNMP (already using Cacti for other servers); see the sketch below
    - Possibility of acting as a PPTP server & routing these connections
    - Not an out-of-the-box Cisco product (we don't have the finances or support to maintain it)

    I'd prefer to use Ubuntu or some other Debian-based distro, but something that integrates everything we're looking for is certainly an option if it offers all the desired features and is easy to configure. Is there a simple set of packages that will provide me with the firewall & accounting features, or am I best served with a custom-built distro or other solution?
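    On the accounting requirement specifically: stock iptables on any Debian-based distro can already do per-direction byte counting that a Cacti data-input script can poll, so a specialised distro may not be needed for that part. A rough sketch, where the interface names are assumptions for a two-NIC gateway:

        # Counting-only chain: rules with no -j target just count packets/bytes.
        iptables -N ACCOUNTING
        iptables -I FORWARD -j ACCOUNTING
        iptables -A ACCOUNTING -i eth0 -o eth1   # colo uplink -> LAN
        iptables -A ACCOUNTING -i eth1 -o eth0   # LAN -> colo uplink

        # Exact byte counters, easy to scrape from a Cacti script:
        iptables -L ACCOUNTING -v -x -n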

    Read the article

  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question; I believe Windows could be a suitable development environment based on the mix of answers there. I have been developing in Ruby (mostly Rails, but not entirely) for about a year now for personal projects on a MacBook Pro, however that machine has faced an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment there included the standard (Linux-like) stuff built into OS X, TextWrangler, Git, RVM, et al. - not too much of a deviation from what the 'devotees' tend to assume. Now I am setting up my new Windows box to continue that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively in Windows. I am aware of the Windows Ruby installer; it seems decent, but it's not exactly as nice as RVM in the OS X/Linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a web server in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby developer environment look like, and what is the workflow? What would a typical step-through be for adding a new feature to an ongoing project, and what would the technology stack look like?

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It's data that comes from a multitude of sources: not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling. A few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records and online transaction records. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation, in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, "big data can unlock significant value by making information transparent and usable at much higher frequency" (McKinsey Global Institute, May 2011). Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator (ODI), is a critical component of Oracle's Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario come from pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) while also correlating it with your enterprise data - this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest whitepapers, webinars, downloads, and more, or go to our website at www.oracle.com/bigdata.

    Read the article

  • Difference between *:80 and _default_:80 in Apache2

    - by Johannes Ernst
    I'm trying to understand the difference between the following two terms in the Apache configuration file: *:80 and _default_:80. The documentation here is unclear to me, and the only mailing list conversation that I could find here does not shed any (comprehensible, to me) light on the matter either. I have a bunch of name-based virtual hosts declared like this:

        <VirtualHost *:80>
            ServerName example.com
            ...

    and I'd like to have an entry that fires when none of those match, i.e. when a request comes in without a virtual host name, or with a virtual host name that has not been declared. Should I use *:80 or _default_:80?
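    For the name-based case, the short version is that _default_ only matters for IP-based virtual hosting; with name-based vhosts on *:80, Apache falls back to whichever <VirtualHost *:80> is defined first. So a catch-all is simply a first-listed vhost whose ServerName will never match real traffic - a sketch, with the names and paths as placeholders:

        # Defined first, so requests with no or unknown Host headers land here.
        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/default
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example.com
        </VirtualHost>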

    Read the article

  • iptables port forwarding troubleshooting

    - by cbmanica
    I'm trying to forward connections on port 18600 to port 9980. I have this in /etc/sysconfig/iptables:

        # Generated by iptables-save v1.3.5 on Mon Oct 21 18:30:43 2013
        *nat
        :PREROUTING ACCEPT [2:280]
        :POSTROUTING ACCEPT [12:768]
        :OUTPUT ACCEPT [12:768]
        -A PREROUTING -p tcp -m tcp --dport 18600 -j REDIRECT --to-ports 9980
        COMMIT
        # Completed on Mon Oct 21 18:30:43 2013

    and /etc/init.d/iptables status shows me this:

        Table: nat
        Chain PREROUTING (policy ACCEPT)
        num  target    prot opt source     destination
        1    REDIRECT  tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:18600 redir ports 9980

    However, I can telnet from localhost to port 9980, but not 18600. What am I missing? (This is a CentOS-based VM.)
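    One detail that bites a lot of people here: packets generated on the box itself never traverse the PREROUTING chain, so a telnet test from localhost bypasses the REDIRECT rule entirely. A sketch of the companion rule for locally generated traffic:

        # nat/OUTPUT is the chain that sees locally generated packets;
        # PREROUTING only sees traffic arriving from outside the host.
        iptables -t nat -A OUTPUT -o lo -p tcp --dport 18600 -j REDIRECT --to-ports 9980

    Testing from a second machine (or from the VM's host) exercises the PREROUTING rule exactly as written.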

    Read the article

  • How to improve batching performance

    - by user4241
    Hello, I am developing a sprite-based 2D game for mobile platform(s), and I'm using OpenGL (well, actually Irrlicht) to render graphics. First I implemented sprite rendering in a simple way: every game object is rendered as a quad with its own GPU draw call, meaning that if I had 200 game objects, I made 200 draw calls per frame. Of course this was a bad choice and my game was completely CPU-bound, because there is a little CPU overhead associated with every GPU draw call. The GPU stayed idle most of the time. Now, I thought I could improve performance by collecting objects into large batches and rendering these batches with only a few draw calls. I implemented batching (so that every game object sharing the same texture is rendered in the same batch), and thought my problems were gone... only to find out that my frame rate was even lower than before. Why? Well, I have 200 (or more) game objects, and they are updated 60 times per second. Every frame I have to recalculate the new position (translation and rotation) of the vertices on the CPU (the GPU on mobile platforms does not support instancing, so I can't do it there), and doing this calculation 48,000 times per second (200*60*4, since every sprite has 4 vertices) simply seems to be too slow. What could I do to improve performance? All game objects are moving/rotating (almost) every frame, so I really have to recalculate vertex positions. The only optimization I could think of is a look-up table for rotations, so that I wouldn't have to calculate them. Would point sprites help? Any nasty hacks? Anything else? Thanks.
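    For the rotation cost in particular, the usual win is one sin/cos pair per sprite (or per look-up-table bucket) applied to all four corners, rather than per-vertex trig. A minimal sketch of the batched math in Python/NumPy, purely to illustrate the data layout; the sprite counts and sizes are made up:

        import numpy as np

        N = 200
        angles = np.random.uniform(0, 2 * np.pi, N)    # one rotation per sprite
        centers = np.random.uniform(0, 480, (N, 2))    # one position per sprite
        half = 16.0                                    # half sprite size

        # Local-space corners shared by every sprite.
        corners = np.array([[-half, -half], [half, -half],
                            [half, half], [-half, half]])

        # One sin/cos per sprite, not per vertex: 200 trig pairs instead of 800.
        c, s = np.cos(angles), np.sin(angles)
        rot = np.stack([np.stack([c, -s], axis=1),
                        np.stack([s,  c], axis=1)], axis=1)  # (N, 2, 2)

        # Rotate and translate all 800 vertices in two vectorized operations.
        world = corners[None, :, :] @ rot.transpose(0, 2, 1) + centers[:, None, :]
        print(world.shape)  # (200, 4, 2)

    The same restructuring applies in C/C++ on device: hoist the two trig calls out of the per-vertex loop and reuse them for all four corners.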

    Read the article

  • Exam 70-630 - TS: Microsoft Office SharePoint Server 2007, Configuring

    - by DigiMortal
    It has been really quiet here, but I wasted no time. I passed exam 70-630 - TS: Microsoft Office SharePoint Server 2007, Configuring - and in this posting I will give you a short overview of this very, very easy exam. If you are not new to SharePoint Server 2007 and you have some development experience, then this is the easiest exam from Microsoft you have ever seen. There are 51 questions in this exam, and only two or four of them were not familiar to me. It took me about one hour to prepare for this exam, and I got 964 of 1000. Okay, I have some years of experience as a SharePoint developer, but these questions still seemed too easy to be real. I mean, based on this exam you cannot accurately say whether somebody is able to configure SharePoint Server or not. I think this exam should also be very easy for SharePoint Server administrators who have at least some experience with supporting and maintaining production systems running on SharePoint Server 2007. Those who do not feel strong on SharePoint Server configuration may read the book suggested by the Microsoft Learning site: Inside Microsoft® Office SharePoint® Server 2007. Exam 70-630 gives you the Microsoft Certified Technology Specialist certificate.

    Read the article

  • Raspberry Pi Mod Offers One Button Audiobook Playback

    - by Jason Fitzpatrick
    How do you design an audiobook player for an elderly book lover who doesn't want to wrestle with new technology? Simple, with a single-button interface, is a great place to start. This clever and thoughtful build comes to us courtesy of tinkerer Michael Clemens. His wife's grandmother, in her 90s, is visually impaired but still loves to take in books via audiobooks. In an effort to make modern MP3 audiobooks accessible to her, Michael built a dedicated audiobook reader based on a Raspberry Pi and programmed it to use a single button. The system boots, loads the audiobook it finds on the attached USB drive, and loads up its track position from memory. Press the button to resume play or, for a refresher, hold the button for four seconds to start the track over. While you may not be in the market for a one-button audiobook player for an elderly relative, the same simple design could be easily adapted, via new scripts, to another function. Hit up the link below to read more about the build. The One Button Audiobook Player [via Hack A Day]
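    For the curious, the press-versus-hold logic is easy to sketch in Python with the RPi.GPIO library. The pin number, timings and player hooks below are all assumptions for illustration, not Michael's actual code:

        import time
        import RPi.GPIO as GPIO

        BUTTON = 17          # assumed BCM pin for the button
        HOLD_SECONDS = 4.0   # hold this long to restart the track

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_UP)

        def resume_playback():   # placeholder player hooks
            print("resume from saved position")

        def restart_track():
            print("restart current track")

        while True:
            GPIO.wait_for_edge(BUTTON, GPIO.FALLING)   # block until a press
            pressed_at = time.time()
            # Time how long the button stays down.
            while GPIO.input(BUTTON) == GPIO.LOW:
                time.sleep(0.05)
            if time.time() - pressed_at >= HOLD_SECONDS:
                restart_track()
            else:
                resume_playback()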

    Read the article

  • Google Analytics iframe code measuring one visitor as two visitors

    - by Maarten
    I'm trying to measure visitors in an iframe and in the site containing the iframe. What I would like is that visitor clicks in the iframe are seen as coming from the same visitor as the containing site, but somehow they are seen as two separate visitors. I followed the examples from http://www.blastam.com/blog/index.php/2011/02/google-analytics-cross-domain-tracking/, trimmed down to an even simpler version based on the comments about setDomainName not being needed anymore, but with setDomainName I get the same result: a click on a page and a click in the iframe are seen as 2 clicks by 2 separate visitors. This is the code in my iframe:

        if (_gaq && gaAccount.length > 0) {
          _gaq.push(['_setAccount', gaAccount]);
          _gaq.push(['_setAllowLinker', true]);
          //_gaq.push(['_setDomainName', 'none']);
          _gaq.push(['_trackPageview', 'mytestcountername']);
        }

    And this is the code in the containing page:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-9605474-4']);
          _gaq.push(['_setAllowLinker', true]);
          //_gaq.push(['_setDomainName', '.domain.nl']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
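    With classic ga.js, the piece usually missing in this setup is the linker: _setAllowLinker only tells the receiving tracker to accept linked cookie values, while the sending page still has to decorate the iframe URL so both trackers share one visitor. A hedged sketch for the containing page, where the iframe id and URL are placeholders:

        <script type="text/javascript">
          // Decorate the iframe src with the _utm linker parameters so the
          // tracker inside the iframe reuses the parent page's visitor cookies.
          _gaq.push(function() {
            var pageTracker = _gat._getTrackerByName();   // default tracker
            var frame = document.getElementById('myIframe');
            frame.src = pageTracker._getLinkerUrl('http://frames.example.nl/widget.html');
          });
        </script>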

    Read the article

  • Silverlight 4, MVVM and Test-Driven Development

    - by Martin Hinshelwood
    As part of his UK tour, Microsoft's Jesse Liberty will be talking in Edinburgh for an evening on Silverlight 4. [Register Now, there are some places left]

    The Talk
    - MVVM and Silverlight to build test-driven programs
    - Understanding Refactoring and Dependency Injection
    - A walk-through of a non-trivial application

    The Speaker
    Jesse Liberty, Silverlight Geek, is a Developer Community Program Manager for Microsoft (US). Lately he has been focused on component-based, test-driven, cross-platform line-of-business application development, and has led the development of the open source Silverlight HyperVideo Platform. Liberty is the author of over two dozen books, and his blog is a required resource for Silverlight programmers. His twenty years of programming experience include stints as a Distinguished Software Engineer at AT&T, Vice President of Human-Computer Interaction at Citibank, and Software Architect at PBS/Learning Link.

    The Venue
    We are meeting at Microsoft's offices in Edinburgh in Waterloo Place. This is the building on the corner of North Bridge at the east end of Princes Street. Parking can be found at the nearby Greenside Row car park, which is just off Leith Walk (used for the Omni Centre). The venue is approximately 2-3 minutes' walk from Edinburgh Waverley train station.

    The Agenda
    18:30 Doors open
    19:00 Welcome
    19:10 Part 1
    20:00 Break
    20:10 Part 2
    20:50 Feedback and Prizes
    21:00 End

    [Register Now, there are some places left]

    Read the article

  • Unable to mount portable hard drive on Ubuntu

    - by VoY
    My portable hard drive (WD My Passport), which used to work correctly, no longer automounts on my Ubuntu system. It does work on a Windows machine, and even when plugged into a WD HD TV, which is a Linux-based device. There's one NTFS partition spanning the whole drive. When I plug the disk in, I see the following in dmesg:

        [269259.504631] usb 1-2.2: new high speed USB device using ehci_hcd and address 20
        [269259.604674] usb 1-2.2: configuration #1 chosen from 1 choice

    However, it does not mount in GNOME, and I don't see it when I type:

        sudo fdisk -l

    Any suggestions why this might be? I repaired the partition using chkdsk on Windows, so the issue is probably not filesystem-related.

    Read the article

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.NET 3.5 website I wrote some years ago, which consists of a "main" public site and a sub-site which is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS, and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database, so staff can just check a box, for instance, to turn the lead into a customer. My question therefore is how to go about doing this? Possible options I've thought of (see the sketch below):

    1. Move the new lead form into the customer database subsite (with authentication turned off).
    2. Add database handling code to the main site. (No, not seriously considering this duplication of effort! :)
    3. Design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database.

    I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.
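    If option 3 appeals, a minimal sketch of what the receiving end could look like: a hypothetical NewLead.ashx handler inside the subsite, opened to anonymous POSTs via a <location> override in web.config. All names, fields and the connection string are placeholders:

        // NewLead.ashx.cs - hypothetical endpoint in the customer-database subsite.
        using System.Data.SqlClient;
        using System.Web;

        public class NewLeadHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Accept only POSTs from the main site's lead form.
                if (context.Request.HttpMethod != "POST")
                {
                    context.Response.StatusCode = 405;
                    return;
                }

                string name = context.Request.Form["name"];
                string email = context.Request.Form["email"];

                using (var conn = new SqlConnection("...lead db connection string..."))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Leads (Name, Email) VALUES (@name, @email)", conn))
                {
                    cmd.Parameters.AddWithValue("@name", name);
                    cmd.Parameters.AddWithValue("@email", email);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
                context.Response.StatusCode = 201;  // created
            }

            public bool IsReusable { get { return true; } }
        }

    The matching web.config piece would be a <location path="NewLead.ashx"> element whose <authorization> section allows unauthenticated users, so the rest of the subsite stays behind forms authentication.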

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The row number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation.

    Properties

        Property        Data Type   Description
        Seed            Int32       The first row number or seed value.
        Increment       Int32       The value added to the previous row number to make the next row number.
        OutputVariable  String      The name of the variable into which the final row number is written post execution. (Optional)

    The three properties have been configured to support expressions, or they can be set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of the property is incorrectly set when the properties are created; see the Troubleshooting section below for details on how to fix this.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache, as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about which components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only: finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry: How do I install a task or transform component?

    We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Number Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

        Row Number Transformation for SQL Server 2005
        Row Number Transformation for SQL Server 2008
        Row Number Transformation for SQL Server 2012

    Version History

    SQL Server 2012
        Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)
    SQL Server 2008
        Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)
    SQL Server 2005
        Version 1.2.0.34 - Updated installer. (25 Jun 2008)
        Version 1.2.0.7 - SQL Server 2005 RTM refresh; SP1 compatibility testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006)
        Version 1.0.0.0 - Public release for SQL Server 2005 IDW 15 June CTP. (29 Aug 2005)

    Code Sample

    The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package.

        Package package = new Package();
        package.Name = "Data Generator & Row Number";

        // Add the Data Flow Task
        Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask");

        // Get the task host wrapper, and the Data Flow task
        TaskHost taskHost = taskExecutable as TaskHost;
        MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;

        // Add Data Generator Source
        IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
        componentSource.Name = "Data Generator";
        componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceSource = componentSource.Instantiate();
        instanceSource.ProvideComponentProperties();
        instanceSource.SetComponentProperty("RowCount", 10000);

        // Add Row Number Tx
        IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New();
        componentRowNumber.Name = "Row Number Transformation";
        componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b";
        CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate();
        instanceRowNumber.ProvideComponentProperties();
        instanceRowNumber.SetComponentProperty("Increment", 10);

        // Connect the two components together
        IDTSPath100 path = dataFlowTask.PathCollection.New();
        path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]);

        #if DEBUG
        // Save package to disk, DEBUG only
        new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
        #endif

        package.Execute();

        foreach (DtsError error in package.Errors)
        {
            Console.WriteLine("ErrorCode    : {0}", error.ErrorCode);
            Console.WriteLine(" SubComponent: {0}", error.SubComponent);
            Console.WriteLine(" Description : {0}", error.Description);
        }

        package.Dispose();

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012.

    If, when you try to use the component, you get an error along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows; you can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.
    Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - How do I install a task or transform component?

    Please also make sure you have installed a minimum of SP1 for SQL 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (see http://support.microsoft.com/kb/916940). If you get the error "Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'." when trying to open the user interface, it implies that your development machine has not had SP1 applied.

    Very occasionally we see a problem with the properties not being created with the correct data type. Since there is no way to programmatically define the data type of a pipeline component property, SSIS can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it as a decimal. This is often highlighted when you use a property expression against the property and get an error similar to "Cannot convert System.Int32 to System.Decimal". Unfortunately this is beyond our control, and there appears to be no pattern as to when it happens; if you do have more information, we would be happy to hear it. To fix this issue you can manually edit the package file. In Visual Studio, right-click the package file in the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name or by the component name, and change the incorrect property data type highlighted below from Decimal to Int32:

        <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" ...>
          <properties>
            <property id="38" name="UserComponentTypeName" ...>
            <property id="41" name="Seed" dataType="System.Int32" ...>10</property>
            <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>
            ...

    If you are still having issues then contact us, but please provide as much detail as possible about the error, as well as which version of the task you are using and details of the SSIS tools installed.

    Read the article

  • How to Be a Software Engineer?

    - by Mistrio
    My problem is kind of weird, so please bear with me. I have been working at a startup concerned basically with mobile development since my graduation 2 years ago. I develop apps for iOS, but that's not really relevant. The startup structure is simply founders and developers, with no middle-tier technical supervision or project management whatsoever. A typical project cycle of ours is like this:

    - meet with a client
    - send very vague requirements to an outsourced graphics designer
    - dig into development right after we get the design, no questions asked
    - then improvise, improvise, improvise!

    It's not that we are unaware that stuff like requirements analysis, UML, design patterns, source code control, testing, development methodologies... etc. exist; we just simply don't use them, and I mean like never. The result is usually a chunk of hardly-maintainable yet working code. Despite everything, we are literally flourishing, with many successful apps on all platforms and bigger clients with each project. The thing is, we want to fix the chaos, and we're looking for advice. How would you fix our company technically? Given that we can't hire project managers or team leaders - we are barely 5 developers, so it wouldn't be a justified cost for the founders - one-time things like courses, books, private training... etc. are an option. Lastly, if it's relevant, we are based in Egypt. Thank you a lot in advance.

    Read the article

  • SSRS/SharePoint - Reports made in Report Builder not listed in SharePoint Web Part

    - by Greg_the_Ant
    I followed the steps here to integrate Reporting Services with SharePoint in native mode. I made a page in SharePoint with the Report Explorer web part, and everything is working. The issue is that when I create a report with the web-based Report Builder tool, it shows up in the Report Manager page but not in the Report Explorer web part on the SharePoint page. New reports I upload using Report Manager do show up. Does anyone have any ideas? I'm really stuck.

    Read the article

  • Amazon: how does their remarkable search work?

    - by JonH
    We are working on a fairly large CRM / knowledge management system in ASP.NET. The DB is SQL Server and is growing in size based on all the various relationships. Upper management keeps asking us to implement search much like Amazon does: right from their search you can specify searching certain categories of objects, like outdoor equipment, clothing, etc., and you can even select all. I keep mentioning to upper management that we need to define the various fields to search on. Their response is "all fields" - they probably look at the search and assume it is so simple. I'm the guy who has to say "hold on guys, we are talking about Amazon here". My question is: how can Amazon run a search on an "all" category? Also, one of the things management here likes is the dynamic filters. For instance, searching "robot" brings up filters specific to a robot toy. How can I put management in check and at least come up with search functionality that works like Amazon's? We are using ASP.NET, SQL Server 2008 and jQuery.

    Read the article

  • SQL 2008 Replication over Internet

    - by Akash Kava
    We have decided to put our servers in data centers on the east and west coasts of the US, to maintain a high level of redundancy. After evaluating a number of replication options, apart from VPN there seems to be no other way to do replication for SQL Server. We are investigating VPN, but I have the following questions. Our large DB consists of media information (pictures/movies/audio/PDF) etc., so we are not very concerned about security, because it is not financially sensitive data.

    1. Does SQL 2005 support, or can it be configured to support, replication over the Internet? If yes, should we downgrade to 2005?
    2. If a SQL 2008 Publisher is configured for Web Sync, can we write a program (a C# Windows Service) to act as a pull subscriber, run on the subscriber server, and replicate the subscriber database? Or are there any APIs available in SQL Server with which we can write our own program to do replication in a very generic way? (In a nutshell, can we write our own C# Windows Service based subscriber program?)

    Read the article

  • Is CentOS legal?

    - by Jim B
    I'm trying to figure out if CentOS is legal (or simply grey). Here's what makes me wonder:

    - They seem to go to great pains not to mention that they are based on Red Hat.
    - In the FAQ they mention a policy about using the Red Hat trademark, but the link no longer exists.
    - When installing it, it's not hard to find a lot of Red Hat code.

    I don't bother much with the Linux world anymore, but I had a client who was wondering about it, as his auditors picked up on it and wanted to know where his license was.

    Read the article

  • Catch Me If You Can

    - by Knut Vatsendvik
    Suppose you have a proxy-based web service using Oracle Service Bus. In a stage in the request pipeline, you are using a Publish action to publish the incoming message to a JMS queue using a business service. What if the outbound transport provider throws an exception (outside of your pipeline)? Is your pipeline able to catch the error with an error handler? This situation could occur because of a faulty connection, a suspended queue, or some other reason. Here is the request pipeline in our simple test case, with an error handler added to the message flow containing a simple Log action. By default, the Publish action will invoke the service in a fire-and-forget fashion. Therefore any exception that occurs in the outbound transport will go unnoticed, as shown in the following invocation trace. So what now? In a message flow, you can apply a Routing Options action to modify any or all of the following properties in the outbound request: URI, Quality of Service, Mode, Retry parameters, and Message Priority. Now add the Routing Options action to the Request Action as shown below. Click the Routing Options to display its properties in the Properties view, select the QoS option to set the Quality of Service element, select Exactly Once to override the default setting, and republish the project. The invocation will now block until the message is completely processed. Trying the same test case as earlier generates the following invocation trace, showing that the error handler is now triggered.

    Read the article

  • How to refresh open source software pkg manager on oldish OpenSolaris?

    - by Luke404
    I'm being presented with an OpenSolaris VPS - actually a Solaris Container - which is based on SXCE snv_121 and has been active since mid-2007: the good old Sun days, IIRC even before the Indiana stuff! For various reasons the system itself can't be rebuilt or upgraded, but we can do whatever we want with the additional package manager on it. My Solaris skills, and especially my knowledge of the free package manager ecosystem, are a bit rusty, so I don't know what I can actually use while keeping the somewhat oldish base system. Currently there is pkg-get using some older Blastwave mirror; it has been used to install things such as Apache 2, PHP, Python and Nagios. I would like to remove all the old rusty stuff and all of Blastwave, and start fresh with some newer package distribution. Can the current Blastwave system be used on snv_121? Is there any better alternative still compatible with that system (e.g. OpenCSW or anything else)?

    Read the article
