Search Results

Search found 17816 results on 713 pages for 'variable names'.

  • Web Services Example - Part 1: Declarative

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 1 of our Web Service examples. In this posting we'll take a look at using a declarative SOAP Web Service. Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Defining our Web Service: First off, we should mention that this sample code is using a public web service provided free by CDYNE Corporation that provides weather forecasts by zipcode. Sometimes this service goes down, so please ensure you know it's up before reporting this example isn't working. Let's take a look at the web service.  We created this by using the "Web Service Data Control" from the New Gallery and using this WSDL:  "http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL"   This web service has several methods, but we're interested in GetCityForecastByZIP, which takes a single string parameter for the zipcode, and the second method, GetWeatherInformation, which enumerates all possible forecast descriptions and associated image URLs.  The latter we'll use in the next edition, but we included it here for completeness. Defining the Application: After adding a feature to the adfmf-feature.xml file, we added a taskflow to host the application flow.  This comprises a home screen with a list containing an item for each method in the web service, "Forecast by Zip" and "Weather Info".  In this application we've also decided to hide the navigation bar since there is only one feature in the application. Forecast by Zip: The "Forecast By ZIP" option first presents the user with a screen where they can enter a zipcode, and when the "Search" button is tapped, it executes the GetCityForecastByZIP method.  This is done by creating an Action binding for that method. The easiest way to accomplish this is to just drag & drop the method from the Data Control palette onto the AMX page, drop it as a button, and let the framework hook it up for you.  There is an inputText component on the page that is bound to a pageFlowScope variable called "zip".  This is used as the parameter to the Action binding when it is executed.  Because the actionListener attribute of the commandButton executes the web service each time, we ensure that the method is invoked every time the button is clicked. Weather Info: Unlike the previous method, this time instead of explicitly executing the web service method we are using deferred invocation.  What this means is that we will bind to the results of the method, and the framework will execute the method when the data is required for rendering.  We do this by simply doing a drag & drop of the results of GetWeatherInformation to the AMX page.  When the page is rendered and the bindings are resolved, the framework invokes the method.  This executes the method only when it is needed and fills the Data Control provider.  Because we never re-execute the method, you can click from Home to Weather Info and back many times and the web service is only ever invoked once. Issues and Possible Improvements: One thing you will quickly realize with this example is that the error handling is done by the framework for you. For simple examples this is fine, but for real applications you'll want to customize these error messages.  With the declarative invocation of web services, this is difficult.
    This is one aspect we'll address in the second installment of the web service examples, where we will show you how to do programmatic invocation, which allows for better error handling. Another issue you will notice with this example is that we can enumerate the weather information, but there isn't an easy way to use that information to show the corresponding description and image as part of the forecast results.  We'll show you how to do this in the next example.

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change? Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product. Fusion CRM's Sweet Spot I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment. On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job. And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.) The "Wedge" Apps In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion. Not Just Your Usual Suspects Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue. 
And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget. Jason Loh Global Solutions Manager, Fusion CRM Sales Planning Oracle Corporation

    Read the article

  • Guaranteed Restore Points as Fallback Method

    - by Mike Dietrich
    Thanks to the great audience yesterday in the Upgrade & Migration Workshop in Utrecht. That was really fun and I was amazed by our new facilities (and the "wellness" lights surrounding the plenum room's walls). And another reason why I like to do these workshops is that I often learn new things from you. So credits here to Rick van Ek, who highlighted the following topic to me. Yesterday (and in some previous workshops) I did mention during the discussion about Fallback Strategies that you'll have to switch on Flashback Database beforehand to create a guaranteed restore point in case you encounter an issue during the database upgrade. I knew that we've made it possible since Oracle Database 11.2 to switch Flashback Database on without taking the database into MOUNT status (you could switch it off while the database is open in all releases). But before Oracle Database 11.2 that did require MOUNT status. SQL> create restore point rp1 guarantee flashback database ; create restore point rp1 guarantee flashback database * ERROR at line 1: ORA-38784: Cannot create restore point 'RP1'. ORA-38787: Creating the first guaranteed restore point requires mount mode when flashback database is off. But Rick did mention that I won't need to switch Flashback Database on to create a guaranteed restore point. And he's right - in older releases I would have had to go into MOUNT state to define the restore point, which meant restarting the database. But in 11.2 that's not necessary anymore. And the same will apply when you upgrade your pre-11.2 database (e.g. an Oracle Database 10.2.0.4) to Oracle Database 11.2. As soon as you start your "old" not-yet-upgraded database in your 11.2 environment with STARTUP UPGRADE you can define a guaranteed restore point. If you tail the alert.log you'll see that the database will start the RVWR (Recovery Writer) background process - you'll just have to make sure that you've defined values for db_recovery_file_dest_size and db_recovery_file_dest. SQL> startup upgrade ORACLE instance started. Total System Global Area  417546240 bytes Fixed Size                  2228944 bytes Variable Size             134221104 bytes Database Buffers          272629760 bytes Redo Buffers                8466432 bytes Database mounted. Database opened. SQL> create restore point grpt guarantee flashback database; Restore point created. SQL> drop restore point grpt; And don't forget to drop that restore point sooner or later, as it is guaranteed and will fill up your Fast Recovery Area pretty quickly. Just on the side: in any case, archivelog mode is required if you'd like to work with restore points. - Mike

    Read the article

  • Call For Papers Tips and Tricks

    - by speakjava
    This year's JavaOne session review has just been completed and by now everyone who submitted papers should know whether they were successful or not.  I had the pleasure again this year of leading the review of the 'JavaFX and Rich User Experiences' track.  I thought it would be useful to write up a few comments to help people in future when submitting session proposals, not just for JavaOne, but for any of the many developer conferences that run around the world throughout the year.  This also draws on conversations I recently had with various Java User Group leaders at the Oracle User Group summit in Riga.  Many of these leaders run some of the biggest and most successful Java conferences in Europe. Try to think of a title which will sound interesting.  For example, "Experiences of performance tuning embedded Java for an ARM architecture based single board computer" probably isn't going to get as much attention as "Do you like coffee with your dessert? Java on the Raspberry Pi".  When thinking of the subject and title for your talk try to steer clear of sessions that might be too generic (and so get lost in a group of similar sessions).  Introductory talks are great when the audience is new to a subject, but beware of providing sessions that are too basic when the technology has been around for a while and there are lots of tutorials already available on the web. JavaOne, like many other conferences has a number of fields that need to be filled in when submitting a paper.  Many of these are selected from pull-down lists (like which track the session is applicable to).  Check these lists carefully.  A number of sessions we had needed to be shuffled between tracks when it was thought that the one selected was not appropriate.  We didn't count this against any sessions, but it's always a good idea to try and get the right one from the start, just in case. JavaOne, again like many other conferences, has two fields that describe the session being submitted: abstract and summary.  These are the most critical to a successful submission.  The two fields have different names and that is significant; a frequent mistake people make is to write an abstract for a session and then duplicate it for the summary.  The abstract (at least in the case of JavaOne) is what gets printed in the show guide and is typically what will be used by attendees when deciding what sessions to attend.  This is where you need to sell your session, not just to the reviewers, but also the people who you want in your audience.  Submitting a one line abstract (unless it's a really good one line) is not usually enough to decide whether this is worth investing an hour of conference time.  The abstract typically has a limit of a few hundred characters.  Try to use as many of them as possible to get as much information about your session across.  The summary should be different from the abstract (and don't leave it blank as some people do).  This field is where you can give the reviewers more detail about things like the structure of the talk, possible demonstrations and so on.  As a reviewer I look to this section to help me decide whether the hard-sell of the title and abstract will actually be reflected in the final content.  Try to make this comprehensive, but don't make it excessively long.  When you have to review possibly hundreds of sessions a certain level of conciseness can make life easier for reviewers and help the cause of your session. 
If you've not made many submissions for talks in the past, or if this is your first, try to give reviewers places to find background on you as a presenter.  Having an active blog and Twitter handle can also help reviewers if they're not sure what your level of expertise is.  Many call-for-papers have places for you to include this type of information.  It's always good to have new and original presenters and presentations for conferences.  Hopefully these tips will help you be successful when you answer the next call-for-papers.

    Read the article

  • Terminal non-responsive on load, can't enter anything until CTRL+C

    - by Silver Light
    Hello! I have an issue with terminal in Ubuntu 10.04. When I launch it, it hangs, like this: I cannot do anything until I press CTRL+C: I cannot remember when this started. What can be wrong? Looks like teminal is loading or processing something each time it loads. How can I diagnose and solve this problem? EDIT: Here are the conents of ~/.bashrc: # ~/.bashrc: executed by bash(1) for non-login shells. # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc) # for examples # If not running interactively, don't do anything [ -z "$PS1" ] && return # don't put duplicate lines in the history. See bash(1) for more options # ... or force ignoredups and ignorespace HISTCONTROL=ignoredups:ignorespace # append to the history file, don't overwrite it shopt -s histappend # for setting history length see HISTSIZE and HISTFILESIZE in bash(1) HISTSIZE=1000 HISTFILESIZE=2000 # check the window size after each command and, if necessary, # update the values of LINES and COLUMNS. shopt -s checkwinsize # make less more friendly for non-text input files, see lesspipe(1) [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)" # set variable identifying the chroot you work in (used in the prompt below) if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then debian_chroot=$(cat /etc/debian_chroot) fi # set a fancy prompt (non-color, unless we know we "want" color) case "$TERM" in xterm-color) color_prompt=yes;; esac # uncomment for a colored prompt, if the terminal has the capability; turned # off by default to not distract the user: the focus in a terminal window # should be on the output of commands, not on the prompt #force_color_prompt=yes if [ -n "$force_color_prompt" ]; then if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then # We have color support; assume it's compliant with Ecma-48 # (ISO/IEC-6429). (Lack of such support is extremely rare, and such # a case would tend to support setf rather than setaf.) color_prompt=yes else color_prompt= fi fi if [ "$color_prompt" = yes ]; then PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ ' else PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ ' fi unset color_prompt force_color_prompt # If this is an xterm set the title to user@host:dir case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac # enable color support of ls and also add handy aliases if [ -x /usr/bin/dircolors ]; then test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)" alias ls='ls --color=auto' #alias dir='dir --color=auto' #alias vdir='vdir --color=auto' alias grep='grep --color=auto' alias fgrep='fgrep --color=auto' alias egrep='egrep --color=auto' fi # some more ls aliases alias ll='ls -alF' alias la='ls -A' alias l='ls -CF' # Add an "alert" alias for long running commands. Use like so: # sleep 10; alert alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"' # Alias definitions. # You may want to put all your additions into a separate file like # ~/.bash_aliases, instead of adding them here directly. # See /usr/share/doc/bash-doc/examples in the bash-doc package. if [ -f ~/.bash_aliases ]; then . ~/.bash_aliases fi # enable programmable completion features (you don't need to enable # this, if it's already enabled in /etc/bash.bashrc and /etc/profile # sources /etc/bash.bashrc). 
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then . /etc/bash_completion fi # Source .profile if [ -f ~/.profile ]; then . ~/.profile fi Setting -x at the beginning showed me that it tries to repeat this without stopping: +++++++++++++++++++ '[' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' '!=' 'complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' ']' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line='complete -f -X '\''!*.@(pdf|PDF)'\'' acroread gpdf xpdf' +++++++++++++++++++ line=' acroread gpdf xpdf' +++++++++++++++++++ list=("${list[@]}" $line) +++++++++++++++++++ read line

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable: # module xfactory.py import x class XFactory: _registry = {} def get_x(self, arg1, arg2, use_cache = True): if use_cache: hash_id = hash((arg1, arg2)) if hash_id in _registry: return _registry[hash_id] obj = x.X(arg1, arg2) _registry[hash_id] = obj return obj # module x.py class X: # ... Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies, and reloads it while the program is running. So far I ignored this danger since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc.). In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety, and only invalidate the cache only when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
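    A minimal sketch of how the pieces discussed above could fit together: an in-memory dictionary backed by pickle files on disk, with the cache key mixing the constructor arguments, a class-level version token, and the class's "traits". Everything here (CACHE_VERSION, TRAITS, the x_cache directory) is a placeholder invented for illustration, not part of the original question, and it deliberately ignores the harder problem of detecting changed dependencies automatically.

    import hashlib
    import os
    import pickle

    class X:
        CACHE_VERSION = "v2"           # hypothetical token; bump it when a change should invalidate old entries
        TRAITS = ("arg1", "arg2")      # e.g. column names, if X were a table

        def __init__(self, arg1, arg2):
            self.arg1 = arg1
            self.arg2 = arg2

    class XFactory:
        def __init__(self, cache_dir="x_cache"):
            self.cache_dir = cache_dir
            os.makedirs(cache_dir, exist_ok=True)
            self._memory = {}

        def _key(self, arg1, arg2):
            # version token + traits + arguments all feed the key, so stale entries are simply never looked up
            raw = repr((X.CACHE_VERSION, X.TRAITS, arg1, arg2))
            return hashlib.sha1(raw.encode("utf-8")).hexdigest()

        def get_x(self, arg1, arg2, use_cache=True):
            if not use_cache:
                return X(arg1, arg2)
            key = self._key(arg1, arg2)
            if key in self._memory:                            # in-memory hit
                return self._memory[key]
            path = os.path.join(self.cache_dir, key + ".pkl")
            if os.path.exists(path):                           # disk hit
                with open(path, "rb") as f:
                    obj = pickle.load(f)
            else:                                              # miss: build and persist
                obj = X(arg1, arg2)
                with open(path, "wb") as f:
                    pickle.dump(obj, f)
            self._memory[key] = obj
            return obj

    Because the key changes whenever CACHE_VERSION or TRAITS change, old pickle files are orphaned rather than returned as stale objects; a periodic cleanup that deletes files whose names no longer match any current key would complete the picture.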

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'? The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries -- the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008: SEPA Credit Transfer (a Payables transaction in Oracle EBS) SEPA Core Direct Debit (a Receivables transaction in Oracle EBS) A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in Euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA credit transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO20022) XML message standards for the implementation of the SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6. A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6. EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments: Euro Member States: February 1, 2014 Non-Euro Member States: October 31, 2016. 
    Oracle and SEPA: Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:
        Release 11.5.10.x - AP & AR
        Release 12.0.x - AP & AR & IBY
        Release 12.1.x - AP & AR & IBY
        Release 12.2.x - AP & AR & IBY
    Resources: To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:
        R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
        R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
        R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
        R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
        R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
        R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
        R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • The gestures of Windows 8 (Consumer preview): part 2, More about Search

    - by Laurent Bugnion
    This is part 2 of a multipart blog post about the gestures and shortcuts in Windows 8 consumer preview. Part 1 can be found here! More about the Search charm In the first installment of this series, we talked about the charms and mentioned a few gestures to display the Search charm. Search is a very central and powerful feature in Windows 8, and allows you to search in Apps, Settings, Files and within Metro applications that support the Search contract. There are a few cool features around the Search, and especially the applications associated to it. I already mentioned the keyboard shortcuts you can use: Win-C shows the Charms bar (same as swiping from the right bevel towards the center of the screen). Win-Q open the Search fly out with Apps preselected. Win-W open the Search fly out with Settings preselected. Win-F open the Search fly out with Files preselected. Searching in Metro apps In addition to these three search domains, you can also search a Metro app, as long as it supports the Search contract (check this Build video to learn more about the Search contract). These apps show up in the Search flyout as shown here: Notice the list of apps below the Files button? That’s what we are talking about. First of all, the list order changes when you search in some applications. For instance, in the image above, I had used the Store with the Search charm. This is why the store shows up as the first app. I am not 100% what algorithm is used here (sorting according to number of searches is my guess), but try it out and try to figure it out Applications that have never been searched are sorted alphabetically. Does it mean we will see cool app names like ___AAA_MyCoolApp? I certainly hope not!! Pinning You can also pin often used apps to the Search flyout. To pin an app with the mouse, right click on it in the Search flyout and select Pin from the context menu. With the keyboard, use the arrow keys to go down to the selected app, and then open the context menu. With the finger, simply tap and hold until you see a semi transparent rectangle indicating that the context menu will be shown, then release. The context menu opens up and you can select Pin. Pin context menu Pinned apps Unpinning, Hiding Using the same technique as for pinning here above, you can also unpin a pinned application. Finally, you can also choose to hide an app from the Search flyout altogether. This is a convenient way to clean up and make it easy to find stuff. Note: At this point, I am not sure how to re-add a hidden app to the Search flyout. If anyone knows, please mention it in the comments, thanks! Reordering You can also reorder pinned apps. To do this, with the finger, tap, hold and pull the app to the side, then pull it vertically to reorder it. You can also reorder with the mouse, simply by clicking on an app and pulling it vertically to the place you want to put it. I don’t think there is a way to do that with the keyboard though. That’s it for now More gestures will follow in a next installment! Have fun with Windows 8   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Questions to ask to ensure someone understands programming? (and iOS)

    - by Stephen J
    So, I've been tutoring my friend for 2 years. Most people learn programming on their own in 3-6 months, (sans algorithms). It's confusing 'cause he'll run anywhere I tell him to, understands how to read C and C++ honestly better than the average college student, and he'll modify and repeat anything I do... but for the love of god he doesn't move on to new things and he still has test anxiety. I've recently realized he's copied and toyed with existing, but not once gained an understanding of why. I was under the impression he was learning fast because he could write it, but when you say "Make a function that takes an NSString" and he says "How?" and I say "The same way you make ANY function that takes any parameter, NSString is just a type like int" and all I hear is "No, it's an NSString, it's a special thing." and we get into an arguing match 'cause I'm like "It's just a class like any other class, you've used them for months now" and blah... I've subconsciously avoided comprehension questions because of this. Anyway, if you have him copy a program and say "Just initialize it" "Where?" "I don't care, didLoad or initWithCoder or Awake from nib, anywhere it gets initialized" and "No, it has to be exactly where you had it!" "No it doesn't!" I'm sick of this, but he won't give up. So I'm done avoiding these yelling matches and becoming a sadist from now on. I would like some help in finding questions to ask him that force him to understand what he's doing. I'd like some help and any resources I can find. CQuestions looked like a good site, but now I need some iPhone stuff. For example: *What do properties do? How are they changed? How do you change the name of the getter? *Why are Booleans inefficent? What advantage does int have over a boolean and how does the bit-shift operator help? *What does Copy do to a string? *What's the difference between a view controller and a uiview? *Write a program from memory that displays blah on screen, and flashes each view one by one. From beginner up to intermediate, hobbyist with some algebra at most. I'm just looking for resources to work with. I left in backstory so you know to "twist" the questions so he doesn't know he's supposed to init a variable here or there, but has to figure it out, and learn why it goes "here" or that "anywhere is fine as long as it's". Sample programs, anything. I'm relatively open about this because, being a programmer, I seriously doubt he's the only one who has this issue. I'd like to know how others have overcome similar. What made things "click"? for you? Did you have a hard time finding answers on Google, and how did you learn a better way to find what you were looking for? (He's so exact, he'll search for how to write a checkers program with color X and Y inside a uiview, as his search string, instead of breaking it up into components, I need help with that too, and believe it is related). This type of problem has to remind one of us of someone they know. So, Exercises to force them to think? Ways we overcame this thing in the past? I greatly appreciate any help.

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those. Well, over time this idea has evolved. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache; however, you don't always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is from Microsoft, released around 2009; it wraps a nice GUI around setting up a paired set of folders (remember the mobile briefcase from Windows 98?) and allows you the choice of synchronization methods: 1 way or 2 way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don't want to replicate, and click run. Scheduling is even easier. MS has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, -R as an argument, and the time schedule. No more complicated command lines or scripts.   I find this especially useful when I use MS backup to back up a system volume, but only want subsets of backup information of a data share, and only when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies; one copy elsewhere suffices to back it up. At home, it is very useful for your pictures, videos, music, etc.; the backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.   Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way: I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read-only through file permissions so it cannot be deleted unless I intervene.   Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155
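    For readers who want to see what the underlying idea looks like in code, the sketch below is a rough Python illustration of a cautious one-way "echo": copy a file only when it is new or its timestamp/size changed, and never delete anything on the destination. It is an assumption-laden toy, not SyncToy, and the paths are made up.

    import os
    import shutil

    def one_way_sync(src, dst):
        """Copy files from src to dst when they are new or changed (by mtime/size); never delete in dst."""
        for root, _dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target_dir = dst if rel == "." else os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                d = os.path.join(target_dir, name)
                s_stat = os.stat(s)
                if (not os.path.exists(d)
                        or os.stat(d).st_mtime < s_stat.st_mtime
                        or os.stat(d).st_size != s_stat.st_size):
                    shutil.copy2(s, d)          # copy2 preserves timestamps, like an "echo"

    if __name__ == "__main__":
        one_way_sync(r"C:\Masters\Installers", r"D:\Backup\Installers")   # hypothetical paths

    Scheduling such a script with Task Scheduler is the same idea as scheduling SyncToyCmd.exe -R, just without the GUI and the folder-pair management SyncToy gives you.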

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks.  Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this: public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new InvalidArgumentException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } Do you see a problem here?  Here's the deal: exceptions should be meaningful.  They have value at a number of levels: In the code, throwing an exception lets the developer know that there is an unsupported condition here. In calling code, different types of exceptions may be handled differently. At runtime, logging of exceptions provides a valuable diagnostic tool. It's this last reason I want to focus on.  If you find yourself literally throwing the exact same exception in more than one location within a given method, stop.  The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown.  When that happens, you or whoever is debugging the problem will have to guess which exception was thrown.  Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior. Don't Guess, Be Specific: When throwing an exception from multiple code paths within the code, be specific.  Virtually every exception allows a custom message; use it, and ensure each case is unique.  If the exception might be handled differently by the caller, then consider implementing a new custom exception type.  Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single call with short-circuiting (e.g. if(x == null || !x.IsValid())); that will guarantee that you can't put different information into the message as easily as constructing the exception separately in each case. The code above might be refactored like so:   public void Foo(SomeClass someArgument) { if(someArgument == null) { throw new ArgumentNullException("someArgument"); } if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }   // Do Real Work } In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this: public void Foo(SomeClass someArgument) { if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); } if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }   // Do Real Work }   Note that in this last example, I'm throwing the same exception type in each case, but with different Message values.  I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging.  (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application (especially since it's likely to be you).
    Be specific with your exception messages, and follow DRY when throwing exceptions within a given method by throwing unique exceptions for each interesting case of invalid state.
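    The guideline is not .NET-specific. Here is a tiny Python sketch of the same idea, using a hypothetical Order type invented for illustration: one distinct, value-carrying message per failure path, so a log line alone tells you which check fired.

    from dataclasses import dataclass

    @dataclass
    class Order:
        quantity: int

    def process_order(order):
        # each invalid state gets its own message, including the offending value
        if order is None:
            raise ValueError("order must not be None")
        if order.quantity < 0:
            raise ValueError(f"order.quantity cannot be less than 0; got {order.quantity}")
        if order.quantity > 100:
            raise ValueError(f"order.quantity cannot exceed 100; got {order.quantity}")
        # do real work
        return True

    With distinct messages (or distinct exception types), the stack trace no longer has to carry the burden of telling you which branch threw.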

    Read the article

  • PowerShell and SMO – be careful how you iterate

    - by Fatherjack
    I’ve yet to have a totally smooth experience with PowerShell and it was late on Friday when I crashed into this problem. I haven’t investigated if this is a generally well understood circumstance and if it is then I apologise for repeating everything. Scenario: I wanted to scan a number of server for many properties, including existing logins and to identify which accounts are bestowed with sysadmin privileges. A great task to pass to PowerShell, so with a heavy heart I started up PowerShellISE and started typing. The script doesn’t come easily to me but I follow the logic of SMO and the properties and methods available with the language so it seemed something I should be able to master. Version #1 of my script. And the results it returns when executed against my home laptop server. These results looked good and for a long time I was concerned with other parts of the script, for all intents and purposes quite happy that this was an accurate assessment of the server. Let’s just review my logic for each step of the code at the top. Lines 1 to 7 just set up our variables and write out the header message Line 8 our first loop, to go through each login on the server Line 10 an inner loop that will assess each role name that each login has been assigned Line 11 a test to see if each role has the name ‘sysadmin’ Line 13 write out the login name with a bright format as it is a sysadmin login Line 17 write out the login name with no formatting It is quite possible that here someone with more PowerShell experience than me will be shouting at their screen pointing at the error I made but to me this made total sense. Until I altered the code, I altered lines 6 and 7 of code above to be: $c = $Svr.Logins.Count write-host “There are $c Logins on the server” This changed my output to look like this: This started alarm bells ringing – there are clearly not 13 logins listed So, let’s see where things are going wrong, edit the script so it looks like this. I’ve highlighted the changes to make Running this code shows me these results Our $n variable should count up by one for each login returned and We are clearly missing some logins. I referenced this list back to Management Studio for my server and see the Logins as below, where there are clearly 13 logins. We see a Login called Annette in SSMS but not in the script results so I opened that up and looked at its properties and it’s server roles in particular. The account has only public access to the server. Inspection of the other logins that the PowerShell script misses out show they too are only members of the public role. Right now I can’t work out whether there is a good reason for this and if it should be expected behaviour or not. Please spend a few minutes to leave a comment if you have an opinion or theory for this. How to get the full list of logins. Clearly I needed to get a full list of the logins so set about reviewing my code to see if there was a better way to iterate through the roles for each login. This is the code that I came up with and I think it is doing everything that I need it to. It gives me the expected results like this: So it seems that the ListMembers() method is the trouble maker in my first versions of the code. I would have expected that ListMembers should return Logins that are only members of the public role, certainly Technet makes no reference to it being left out in it’s Login.ListMembers details. Suffice to say, it’s a lesson learned and I will approach using it with caution in future circumstances.

    Read the article

  • Drawing random smooth lines contained in a square [migrated]

    - by Doug Mercer
    I'm trying to write a matlab function that creates random, smooth trajectories in a square of finite side length. Here is my current attempt at such a procedure: function [] = drawroutes( SideLength, v, t) %DRAWROUTES Summary of this function goes here % Detailed explanation goes here %Some parameters intended to help help keep the particles in the box RandAccel=.01; ConservAccel=0; speedlimit=.1; G=10^(-8); % %Initialize Matrices Ax=zeros(v,10*t); Ay=Ax; vx=Ax; vy=Ax; x=Ax; y=Ax; sx=zeros(v,1); sy=zeros(v,1); % %Define initial position in square x(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1); y(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1); % for i=2:10*t %Measure minimum particle distance component wise from boundary %for each vehicle BorderGravX=[abs(SideLength*ones(v,1)-x(:,i-1)),abs(x(:,i-1))]'; BorderGravY=[abs(SideLength*ones(v,1)-y(:,i-1)),abs(y(:,i-1))]'; rx=min(BorderGravX)'; ry=min(BorderGravY)'; % %Set the sign of the repulsive force for k=1:v if x(k,i)<.5*SideLength sx(k)=1; else sx(k)=-1; end if y(k,i)<.5*SideLength sy(k)=1; else sy(k)=-1; end end % %Calculate Acceleration w/ random "nudge" and repulive force Ax(:,i)=ConservAccel*Ax(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sx*G./rx.^2; Ay(:,i)=ConservAccel*Ay(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sy*G./ry.^2; % %Ad hoc method of trying to slow down particles from jumping outside of %feasible region for h=1:v if abs(vx(h,i-1)+Ax(h,i))<speedlimit vx(h,i)=vx(h,i-1)+Ax(h,i); elseif (vx(h,i-1)+Ax(h,i))<-speedlimit vx(h,i)=-speedlimit; else vx(h,i)=speedlimit; end end for h=1:v if abs(vy(h,i-1)+Ay(h,i))<speedlimit vy(h,i)=vy(h,i-1)+Ay(h,i); elseif (vy(h,i-1)+Ay(h,i))<-speedlimit vy(h,i)=-speedlimit; else vy(h,i)=speedlimit; end end % %Update position x(:,i)=x(:,i-1)+(vx(:,i-1)+vx(:,i))/2; y(:,i)=y(:,i-1)+(vy(:,i-1)+vy(:,1))/2; % end %Plot position clf; hold on; axis([-100,SideLength+100,-100,SideLength+100]); cc=hsv(v); for j=1:v plot(x(j,1),y(j,1),'ko') plot(x(j,:),y(j,:),'color',cc(j,:)) end hold off; % end My original plan was to place particles within a square, and move them around by allowing their acceleration in the x and y direction to be governed by a uniformly distributed random variable. To keep the particles within the square, I tried to create a repulsive force that would push the particles away from the boundaries of the square. In practice, the particles tend to leave the desired "feasible" region after a relatively small number of time steps (say, 1000)." I'd love to hear your suggestions on either modifying my existing code or considering the problem from another perspective. When reading the code, please don't feel the need to get hung up on any of the ad hoc parameters at the very beginning of the script. They seem to help, but I don't believe any beside the "G" constant should truly be necessary to make this system work. Here is an example of the current output: Many of the vehicles have found their way outside of the desired square region, [0,400] X [0,400].
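    One alternative to the repulsive-force idea, sketched here in Python/NumPy rather than MATLAB: cap the speed and reflect a particle's position and velocity whenever a step would carry it past a wall. The parameter values are arbitrary and the code is only meant to show the reflection trick, not to reproduce the original script.

    import numpy as np
    import matplotlib.pyplot as plt

    def smooth_paths(side=400.0, n_paths=5, n_steps=2000, accel=0.05, vmax=2.0, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(0.15 * side, 0.85 * side, size=(n_paths, 2))   # start well inside the box
        vel = np.zeros((n_paths, 2))
        out = np.empty((n_steps, n_paths, 2))
        for t in range(n_steps):
            vel += rng.uniform(-accel, accel, size=vel.shape)            # random "nudge" to the velocity
            speed = np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-12)
            vel = np.where(speed > vmax, vel * (vmax / speed), vel)      # speed limit keeps paths smooth
            pos += vel
            over, under = pos > side, pos < 0.0                          # reflect off the walls
            pos[over] = 2.0 * side - pos[over]
            pos[under] = -pos[under]
            vel[over | under] *= -1.0
            out[t] = pos
        return out

    paths = smooth_paths()
    for k in range(paths.shape[1]):
        plt.plot(paths[:, k, 0], paths[:, k, 1])
    plt.xlim(0, 400)
    plt.ylim(0, 400)
    plt.show()

    Because positions are reflected rather than nudged by a force, no trajectory can leave the square no matter how long the walk runs, which was the failure mode described above.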

    Read the article

  • Kinect losing tracked players with Beta2 SDK

    - by Eric B
    So i'm creating a game using the Beta2 SDK for Kinect. The issue i am having is that in the middle of gameplay if another person enters the Kinects FOV it stops tracking the player and will not track anyone else for several minutes. Same deal if the player leaves the FOV and reenters it. Here is what im using to detect players. void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e) { int playersAlive = 0; // reset lists skeletons = new Dictionary<int, SkeletonData>(); //create a new list for skeletons menuSkeleton = new List<SkeletonData>(); initialPlayers = new Dictionary<float, SkeletonData>(); //create a new list for initialPlayers foreach (SkeletonData s in e.SkeletonFrame.Skeletons) //for each skeleton the kinect has detected { if (s.TrackingState == SkeletonTrackingState.Tracked) // players found { menuSkeleton.Add(s); if (initialized) // after initialization { skeletons.Add(s.TrackingID, s); } else // before initialization initialPlayers.Add(s.Joints[JointID.ShoulderCenter].Position.X, s); //if we are not initialized then add this player to the inital player list. playersAlive++; } } if (playersAlive == TOTAL_PLAYERS_ALLOWED) // If there is one player { if (!inMiniGame) // Before the game starts gameStart = DateTime.Now; // Reset initialization timer if (!initialized) // Before initialization // NOTE TO SELF I TOOK OUT && inMenu { InitializePlayers(); if (DateTime.Now.Subtract(gameStart).TotalMilliseconds > INITIALIZATION_WAIT_TIME) { initialized = true; // initialize timers from fixed starting time if (inMiniGame) //if the game has started { gamePause = gameStart; //TODO ERIC: Initialize any Timers Here } } } } } /// <summary> /// this function initializes the players adding them to a list /// and making one of the players the menu controller, for LIM we will need to change the code so that the /// game only recognizes and supports one player at a time /// variable names will need to be change as well. /// </summary> private void InitializePlayers() { List<float> initialPos = new List<float>(); // used to track starting positions players = new Dictionary<int, Player>(); foreach (float pos in initialPlayers.Keys) { initialPos.Add(pos); //add position of each inital player to list } float first = initialPos[0]; // left player first, right second Player player = new Player(initialPlayers[first].TrackingID, true); player.PlayerNumber = PLAYER_ONE; player.Skeleton = initialPlayers[first]; player.Specifics = new PlayerSpecifics(player.PlayerNumber); player.Specifics.PauseTimer = gameStart; players.Add(initialPlayers[first].TrackingID, player); menuController = initialPlayers[first].TrackingID; //menu controller is player 1 } This is a one player game. Also when the game starts Initialize is set to false, and gets set to true when i go from the games menu into the gameplay. So can anyone see any issues with this code block that would cause the kinect to lose players as they enter/exit the FOV? and not re-track them? Thank you for any help.

    Read the article

  • Single use download script - Modification [on hold]

    - by Iulius
    I have this Single use download script! This contain 3 php files: page.php , generate.php and variables.php. Page.php Code: <?php include("variables.php"); $key = trim($_SERVER['QUERY_STRING']); $keys = file('keys/keys'); $match = false; foreach($keys as &$one) { if(rtrim($one)==$key) { $match = true; $one = ''; } } file_put_contents('keys/keys',$keys); if($match !== false) { $contenttype = CONTENT_TYPE; $filename = SUGGESTED_FILENAME; readfile(PROTECTED_DOWNLOAD); exit; } else { ?> <html> <head> <meta http-equiv="refresh" content="1; url=http://docs.google.com/"> <title>Loading, please wait ...</title> </head> <body> Loading, please wait ... </body> </html> <?php } ?> Generate.php Code: <?php include("variables.php"); $password = trim($_SERVER['QUERY_STRING']); if($password == ADMIN_PASSWORD) { $new = uniqid('key',TRUE); if(!is_dir('keys')) { mkdir('keys'); $file = fopen('keys/.htaccess','w'); fwrite($file,"Order allow,deny\nDeny from all"); fclose($file); } $file = fopen('keys/keys','a'); fwrite($file,"{$new}\n"); fclose($file); ?> <html> <head> <title>Page created</title> <style> nl { font-family: monospace } </style> </head> <body> <h1>Page key created</h1> Your new single-use page link:<br> <nl> <?php echo "http://" . $_SERVER['HTTP_HOST'] . DOWNLOAD_PATH . "?" . $new; ?></nl> </body> </html> <?php } else { header("HTTP/1.0 404 Not Found"); } ?> And the last one Variables.php Code: <? define('PROTECTED_DOWNLOAD','download.php'); define('DOWNLOAD_PATH','/.work/page.php'); define('SUGGESTED_FILENAME','download-doc.php'); define('ADMIN_PASSWORD','1234'); define('EXPIRATION_DATE', '+36 hours'); header("Cache-Control: no-cache, must-revalidate"); header("Expires: ".date('U', strtotime(EXPIRATION_DATE))); ?> The http://www.site.com/generate.php?1234 will generate a unique link like page.php?key1234567890. This link page.php?key1234567890 will be sent by email to my user. Now how can I generate a link like this page.php?key1234567890&[email protected] ? So I think I must access the generator page like this generate.php?1234&[email protected] . P.S. This variable will be posted on the download page by "Hello, " I tried everthing to complete this, and no luck. Thanks in advance for help.
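    The change being asked about is conceptually small: store the e-mail next to the key when the key is generated, put both into the link, and read the e-mail back when the key is consumed. The sketch below shows only that flow, in Python rather than PHP, with invented file names; it is not a drop-in patch for generate.php or page.php.

    import os
    import uuid

    KEYS_FILE = "keys.txt"                       # hypothetical store: one "key,email" pair per line

    def generate_link(email, base_url="http://www.site.com/page.php"):
        key = "key" + uuid.uuid4().hex           # analogous to PHP's uniqid('key', TRUE)
        with open(KEYS_FILE, "a") as f:
            f.write(f"{key},{email}\n")
        return f"{base_url}?{key}&{email}"       # produces page.php?keyXXXX&[email protected], as in the question

    def consume_key(key):
        """Return the e-mail for a valid unused key and remove it; return None otherwise."""
        if not os.path.exists(KEYS_FILE):
            return None
        with open(KEYS_FILE) as f:
            lines = f.read().splitlines()
        email, kept = None, []
        for line in lines:
            stored_key, _, stored_email = line.partition(",")
            if stored_key == key and email is None:
                email = stored_email             # match: consume the key by not keeping the line
            else:
                kept.append(line)
        with open(KEYS_FILE, "w") as f:
            f.write("\n".join(kept) + ("\n" if kept else ""))
        return email

    link = generate_link("[email protected]")
    key = link.split("?")[1].split("&")[0]
    print(link)
    print(consume_key(key))                      # prints the e-mail exactly once
    print(consume_key(key))                      # None: the key has already been used

    Translated back to the PHP scripts, the same idea would roughly mean appending the e-mail to each line written in generate.php and splitting the query string on "&" in page.php before the key comparison.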

    Read the article

  • MVC Communication Pattern

    - by Kedu
    This is kind of a follow-up question to http://stackoverflow.com/questions/23743285/model-view-controller-and-callbacks, but I wanted to post it separately because it's kind of a different topic. I'm working on a multiplayer card game for the Android platform. I split the project into MVC, which fits the needs pretty well, but I'm currently stuck because I can't figure out a good way to communicate between the different parts. I have everything set up and working with the controller being a big state machine, which is called over and over from the game loop, and which calls getter methods on the GUI and the Android/network part to get the input. The input itself in the GUI and network is set by input listeners that set a local variable which I read in the getter method. So far so good, this is working. But my problem is that the controller has to check every input separately, so if I want to add an input I have to check in which states it's valid and call the getter method from all these states. This is not good: it makes the code look pretty ugly, makes additions uncomfortable, and adds redundancy. So what I've taken from the question I mentioned above is that some kind of command or event pattern will fit my needs. What I want to do is create a shared, thread-safe queue in the controller and, instead of calling all these getter methods, just check the queue for new input and process it. On the other side, the GUI and network don't have all these getters, but instead create an event or command and send it to the controller through, for example, observer/observable. Now my problem: I can't figure out a way for these commands/events to fit a common interface (which the queue can store) and still transport different kinds of data (button clicks, cards that are played, the player ID the command comes from, synchronization data, etc.). If I design the communication as a command pattern, I have to put all the information needed to execute the command into it when it's created. That's impossible, because the GUI or network has no knowledge of all the things the controller needs in order to execute what has to be done when, for example, a card is played. I thought about supplying this information to the command when executing it. But across all the different commands I have, I would need all the information the controller has, and thus would have to give the command a reference to the controller, which would make everything in it public, which is really bad design, I guess. So, I could try some kind of event pattern. I have to transport data in the event. So, like the command, I would have an interface which all events have in common and which can be stored in the shared queue. I could create a big enum with all the different events that are possible, save one of these enums in the actual event, and build a big switch case over the events to process different things for different events. The problem here: I have different data for each of the events, but I need a common interface to store the events in a queue. How do I get at the specific data if I can only access the event through the interface? Even if that weren't a problem, I'm creating another big switch case, which looks ugly, and when I want to add a new event, I have to create the event itself, the case, the enum, and the method that's called with the data. I could of course check the event against the enum and cast it to its type, so I can call event-type-specific methods that give me the data I need, but that looks like bad design too.
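For what it's worth, here is a minimal C# sketch of the double-dispatch variant of this idea (the same shape translates directly to Java on Android): the queue only ever sees the common interface, but each concrete event carries its own strongly typed data and routes itself to a matching handler method, so there is no enum, no central switch, and no casting. All names below are illustrative, not taken from your project.

using System.Collections.Concurrent;

// Each event knows how to hand itself to a typed handler method (double dispatch),
// so the controller never needs to inspect or downcast queue entries.
interface IGameEventHandler
{
    void Handle(CardPlayedEvent e);
    void Handle(ButtonClickedEvent e);
}

interface IGameEvent
{
    void Dispatch(IGameEventHandler handler);   // double-dispatch entry point
}

class CardPlayedEvent : IGameEvent
{
    public int PlayerId;
    public int CardId;
    public void Dispatch(IGameEventHandler handler) { handler.Handle(this); }
}

class ButtonClickedEvent : IGameEvent
{
    public string ButtonName;
    public void Dispatch(IGameEventHandler handler) { handler.Handle(this); }
}

class Controller : IGameEventHandler
{
    // Thread-safe queue shared with the GUI/network producers.
    readonly ConcurrentQueue<IGameEvent> inbox = new ConcurrentQueue<IGameEvent>();

    // Called by the GUI or network listeners when input arrives.
    public void Post(IGameEvent e) { inbox.Enqueue(e); }

    // Called once per iteration of the game loop.
    public void PumpEvents()
    {
        IGameEvent e;
        while (inbox.TryDequeue(out e))
            e.Dispatch(this);                   // lands in the typed Handle overload below
    }

    public void Handle(CardPlayedEvent e)   { /* controller-only state and rules live here */ }
    public void Handle(ButtonClickedEvent e) { /* ... */ }
}

Adding a new kind of input then means adding one event class and one Handle overload; the knowledge the controller needs (game state, rules) stays inside the controller, because the typed Handle methods run there rather than inside the command.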

    Read the article

  • Shrinking a Linux OEL 6 virtual Box image (vdi) hosted on Windows 7

    - by AndyBaker
    Recently, for a customer demonstration, there was a requirement to build a virtual box image with Oracle Enterprise Manager Cloud Control 12c. This meant installing OEL Linux 6 as well as creating an 11gR2 database and Oracle Enterprise Manager Cloud Control 12c on a single virtual box. Storage was sized at 300Gb using dynamically allocated storage for the virtual box, and about 10Gb was used for Linux and the initial build. After copying over all the binaries and performing all the installations, the virtual box grew to around 80Gb of used space on the host operating system; however, internally it only really needed around 20Gb. This meant 60Gb had been consumed while copying over all the binaries and, although now free inside the guest, it was not returned to the host operating system because the virtual box storage '.vdi' file had grown. Once the 'vdi' storage has grown, it is not shrunk automatically afterwards. Space is always tight on the laptop, so it was desirable to shrink the virtual box back to a minimal size, and here is the process that was followed. Install the 'zerofree' Linux package into the OEL6 virtual box The RPM was downloaded and installed from a site similar to the one below: http://rpm.pbone.net/index.php3/stat/4/idpl/12548724/com/zerofree-1.0.1-5.el5.i386.rpm.html A simple internet search for 'zerofree Linux rpm' was easy to perform and found the required rpm. Execute the 'zerofree' package on the desired Linux file system To execute this package the desired file system needs to be mounted read-only. The following steps outline this process. As root: # umount /u01 As root: # mount -o ro -t ext4 /u01 NOTE: The -o is options and the -t is the file system type found in /etc/fstab. Next run zerofree against the required storage; this is located by a simple 'df -h' command to see the device associated with the mount. As root: # zerofree -v /dev/sda11 NOTE: This takes a while to run, but the '-v' option gives feedback on the process. What does zerofree do? Zerofree's purpose is to go through the file system and zero out any unused sectors on the volume so that the later stages can shrink the virtual box storage, getting the free space back. When zerofree has completed, the virtual box can be shut down, as the last stage is performed on the physical host where the virtual box vdi files are located. Compact the virtual box '.vdi' files The final stage is to get virtual box to shrink back the storage that has been correctly flagged as free space after executing zerofree. On the physical host, in this case a Windows 7 laptop, a DOS window was opened. At the prompt the first step is to put the virtual box binaries onto the PATH. C:\>echo %PATH% The above shows the current value of the PATH environment variable. C:\>set PATH=%PATH%;c:\program files\Oracle\Virtual Box; The above adds the virtual box binary location onto the existing path. C:\>cd c:\Users\xxxx\OEL6.1 The above changes directory to where the VDI files are located for the required virtual box machine. C:\Users\xxxxx\OEL6.1>VBoxManage.exe modifyhd zzzzzz.vdi compact NOTE: The zzzzzz.vdi is the name of the required vdi file to shrink. Finally the above command is executed to perform the compact operation on the '.vdi' file(s). This also takes a long time to complete, but shrinks the VDI file back to a minimum size.
In the case of the demonstration virtual box OEM12c this reduced the virtual box to 20Gb from 80Gb which was a great outcome to achieve.

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance, it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax.  The two most common explanations that I have seen are: Performance:  The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, and so by using SELECT * you are retrieving large volumes of data that you don’t need, but the system has to process, marshal across tiers, and so on.  It would be much more efficient to only select the specific columns that you need. Future-proof:  If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system.  For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]) you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions.  Or if you use INSERT…SELECT * then you will likely run into errors when a new column is added to the source table in any position. And if you use SELECT * in the definition of a view, you will run into a variation of the future-proof problem mentioned above.  One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development.  I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly.  I’ll walk you through the test script so you can see for yourself what happens. We are going to create a table and two views that are based on that table, one of them uses SELECT * and the other explicitly lists the column names.  The script to create these objects is listed below.
IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
go
IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
go
IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
go
CREATE TABLE testtab (col1 NVARCHAR(5) null, col2 NVARCHAR(5) null)
INSERT INTO testtab(col1, col2)
VALUES ('A','B'), ('A','B')
GO
CREATE VIEW testtab_vw AS SELECT * FROM testtab
GO
CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
go
Now, to prove that the two views currently return equivalent results, select from them.
SELECT 'star', col1, col2 FROM testtab_vw
SELECT 'named', col1, col2 FROM testtab_vw_named
OK, so far, so good.  Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns?  (Side note, I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together.  Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)
DROP TABLE testtab
go
CREATE TABLE testtab (col1 NVARCHAR(5) null, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) null)
INSERT INTO testtab(col1, col3, col2)
VALUES ('A','C','B'), ('A','C','B')
go
SELECT 'star', col1, col2 FROM testtab_vw
SELECT 'named', col1, col2 FROM testtab_vw_named
I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens.  When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created.  Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table. I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2, but actually be receiving data from col3.  By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1. So, let the developer beware…know what assumptions are in effect around your code, and keep on discouraging people from using SELECT * syntax in anything but the simplest of ad-hoc queries. And of course, let’s clean up after ourselves.  To eliminate the database objects created during this test, run the following commands.
DROP TABLE testtab
DROP VIEW testtab_vw
DROP VIEW testtab_vw_named

    Read the article

  • A .NET Developer's day with the iPad.

    - by mbcrump
    The Apple iPad is currently getting a lot of buzz because of the app store, the book store and of course iTunes. I had the chance to play with one and this is what I have learned about the device. Let's get this out of the way first: the iPad is awesome. It is the device for media consumption and casual web browsing. But how does it measure up for those of us with .NET on our brains all day? Let's find out… Main Screen – you can customize everything on this page. I guess I should replace that image with a C# or VS logo. It's pretty standard stuff if you have an iPhone.   Programming Books If you have a subscription to Safari Books Online, then you are in luck; it's very easy to read the books on the iPad. Just fire up the Safari web browser and go to Safari Books Online. The biggest benefit that I can see with the iPad is the ability to read books wherever I am and not have to worry about purchasing books that I already have as a .PDF. Below is a sample from Code Complete 2nd Edition. Below is a PDF of the ECMA-334 C# Language Specification. As you can see it's very readable and you should have no problem reading actual code.   Example of code shown below: It is, however, easier to read PDFs and store them with a 3rd-party PDF reader. I have seen several for 99 cents or less. You can, however, switch the screen to vertical to get more viewing space, as shown below: I was disappointed with the iBooks application. I could not find a single .NET programming book anywhere. I was able to download the excellent sci-fi book "A memory of Wind" for free though. If I just overlooked them, then please email me with the names and titles. I couldn't even find a technology category in the categories list. Web Surfing – Technical Sites Below is an example of my site in Safari. The code is very readable and the experience was identical to viewing it in Firefox. I tried multiple programming sites and the pages looked great, except those that used Flash, which of course did not display.   News Apps - Technical Content The standard NY Times and USA Today looked great, but the technical content was lacking. It would probably be better to use Google Reader for online technical news.     YouTube Videos – Technical Content  Since it's YouTube, we already know that a lot of technical content exists, and it plays great on the iPad. I watched several programming videos and could clearly see the code being written. Taking Technical Notes The iPad comes with a great notepad for taking notes. I found that it was easy to take notes regarding projects that I am currently working on.   Calendar The calendar that ships with the iPad is great for organizing. You can set up an Exchange server or manually enter the information. Pretty standard stuff.    Random applications that I like: TweetDeck and Adobe Ideas. Adobe Ideas is kind of like SketchFlow, except you use your finger to mock up the sketches.  Don't forget that the iPad is great for any type of podcasting. That pretty much sums it up; I would definitely recommend this device, as it will only get better. I believe iOS 4 comes out on the 24th, and the iPad will only get more and more apps. You could save a few bucks by waiting for the 2nd generation, but that's a call that only you can make.

    Read the article

  • Searching for context in Silverlight applications

    - by PeterTweed
    A common behavior in business applications that has developed through the ages is for a user to get information or execute commands related to some displayed information or function by right-clicking the object in question and popping up a context menu that offers relevant options. The Silverlight Toolkit April 2010 release introduced the context menu object.  This can be added to other UI objects and display options for the user to choose.  The menu items can be enabled or disabled as per your application logic, and icons can be added to the menu items for visual effect.  This post will walk you through how to use the context menu object from the Silverlight Toolkit. Steps: 1. Create a new Silverlight 4 application 2. Copy the following namespace definition to the user control object of the MainPage.xaml file: xmlns:my="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit"   3. Copy the following XAML into the LayoutRoot grid in MainPage.xaml: <Border CornerRadius="15" Background="Blue" Width="400" Height="100"> <TextBlock Foreground="White" FontSize="20" Text="Context Menu In This Border...." HorizontalAlignment="Center" VerticalAlignment="Center" > </TextBlock> <my:ContextMenuService.ContextMenu> <my:ContextMenu > <my:MenuItem Header="Copy" Click="CopyMenuItem_Click" Name="copyMenuItem"> <my:MenuItem.Icon> <Image Source="copy-icon-small.png"/> </my:MenuItem.Icon> </my:MenuItem> <my:Separator/> <my:MenuItem Name="pasteMenuItem" Header="Paste" Click="PasteMenuItem_Click"> <my:MenuItem.Icon> <Image Source="paste-icon-small.png"/> </my:MenuItem.Icon> </my:MenuItem> </my:ContextMenu> </my:ContextMenuService.ContextMenu> </Border> The above code associates a context menu, with two menu items and a separator between them, to the border object.  The menu items have icons associated with them to add visual appeal.  The menu items have click event handlers that will be added in the MainPage.xaml.cs code-behind in a later step. 4. Add two icon-sized images to the ClientBin directory of the web project hosting the Silverlight application, named copy-icon-small.png and paste-icon-small.png respectively.  I used copy and paste icons, as the names suggest. 5. Add the following code to the class in the MainPage.xaml.cs file: private void CopyMenuItem_Click(object sender, RoutedEventArgs e) { MessageBox.Show("Copy selected"); } private void PasteMenuItem_Click(object sender, RoutedEventArgs e) { MessageBox.Show("Paste selected"); } This code adds the event handlers for the menu items defined in step 3. 6. Run the application, right-click on the border, select a menu option, and see the appropriate message box displayed. Congratulations, it's that easy!   Take the Slalom Challenge at www.slalomchallenge.com!
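The post mentions that menu items can be enabled or disabled from application logic. As a small follow-on to step 5, here is a minimal sketch of how that might look in the code-behind; it assumes the Name values from the XAML above (copyMenuItem, pasteMenuItem) are available as fields after InitializeComponent, and the clipboardHasContent flag is purely illustrative application state, not part of the toolkit.

using System.Windows;
using System.Windows.Controls;

public partial class MainPage : UserControl
{
    // Illustrative application state: pretend "paste" is only meaningful
    // after something has been copied.
    private bool clipboardHasContent;

    public MainPage()
    {
        InitializeComponent();
        UpdateContextMenuState();
    }

    private void UpdateContextMenuState()
    {
        // copyMenuItem / pasteMenuItem are the Name values declared in the XAML above.
        copyMenuItem.IsEnabled = true;
        pasteMenuItem.IsEnabled = clipboardHasContent;
    }

    private void CopyMenuItem_Click(object sender, RoutedEventArgs e)
    {
        clipboardHasContent = true;
        MessageBox.Show("Copy selected");
        UpdateContextMenuState();
    }

    private void PasteMenuItem_Click(object sender, RoutedEventArgs e)
    {
        MessageBox.Show("Paste selected");
    }
}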

    Read the article

  • C# Dev Challenge Part 1 of n – Beginner Edition

    - by mbcrump
    I developed this challenge to test one's knowledge of C Sharp. I am planning on creating several challenges with different skill sets, so don't get mad if this challenge doesn't really challenge you... I noticed that most people like short quizzes, so this one only contains 5 questions. All of the challenges are clear and concise about what I am asking you to do. No smoke and mirrors here, meaning that none of the code has syntax errors. The purpose of this exercise is to test several OOP concepts and see how much of the C# language you really know. Question #1 – Let's start off easy… Will the following code snippet compile successfully? What does this question test? - Can this compile without a namespace? Do you have to have an entry point of "static void Main()"? class Test { static int Main() { System.Console.WriteLine("Developer Challenge"); return 0; } } Answer (select text in box below): Yes, it will compile successfully. Question #2 – What is the value of the Console.WriteLine statements? What does this question test? – Do I understand reference types/value types? If a variable is declared with the @ symbol and it's not a reserved keyword, does the application compile successfully? using System; internal struct MyStruct { public int Value; } internal class MyClass { public int Value; } class Test { static void Main() { MyStruct @struct1 = new MyStruct(); MyStruct @struct2 = @struct1; @struct2.Value = 100; MyClass @ref1 = new MyClass(); MyClass @ref2 = @ref1; @ref2.Value = 100; Console.WriteLine("Value Type: {0} {1}", @struct1.Value, @struct2.Value); Console.WriteLine("Reference Type: {0} {1}", @ref1.Value, @ref2.Value); } } Answer (select text in box below): Value Type: 0 100 Reference Type: 100 100 Question #3 – What is the value of the Console.WriteLine statements? What does this question test? – Can 2 objects reference the same point in memory? using System; class Test { static void Main() { string s1 = "Testing2"; string t1 = s1; Console.WriteLine(s1 == t1); Console.WriteLine((object)s1 == (object)t1); } } Answer (select text in box below): True True Question #4 – What is the value of the Console.WriteLine statements? What does this question test? – How does the "Stack" work – LIFO or FIFO?   using System; using System.Collections; class Test { static void Main() { Stack a = new Stack(5); a.Push("1"); a.Push("2"); a.Push("3"); a.Push("4"); a.Push("5"); foreach (var o in a) { Console.WriteLine(o); } } } Answer (select text in box below): 5 4 3 2 1 Question #5 – What is the value of the Console.WriteLine statements? What does this question test? – Array and general looping knowledge. using System; namespace ConsoleApplication5 { class Program { static void Main(string[] args) { int[] J_LIST = new int[5] { 1, 2, 3, 4, 5 }; int K = 10; int L = 5; foreach (var J in J_LIST) { K = K - J; L = K + 2 * J; Console.WriteLine("J = {0, 5} K = {1, 5} L = {2, 5}", J, K, L); } Console.ReadLine(); } } } Answer (select text in box below): J = 1 K = 9 L = 11 J = 2 K = 7 L = 11 J = 3 K = 4 L = 10 J = 4 K = 0 L = 8 J = 5 K = -5 L = 5 Stay tuned for more challenges!

    Read the article

  • Using NSpec at various architectural layers

    - by nono
    Having read the quick start at nspec.org, I realized that NSpec might be a useful tool in a scenario which was becoming a bit cumbersome with NUnit alone. I'm adding OAuth (or rather, DotNetOpenAuth) to a website and quickly made a mess of writing test methods such as [Test] public void UserIsLoggedInLocallyPriorToInvokingExternalLoginAndExternalLoginSucceedsAndExternalProviderIdIsNotAlreadyAssociatedWithUserAccount() { ... } ... and I wound up with maybe a dozen permutations of this theme, for the user already being logged in locally and not locally, the external login succeeding or failing, etc. Not only were the method names unwieldy, but every test needed a setup that contained parts in common with a different set of other tests. I realized that NSpec's incremental setup capabilities would work great for this, and for a while I was trucking along wonderfully, with code like act = () => { actionResult = controller.ExternalLoginCallback(returnUrl); }; context["The user is already logged in"] = () => { before = () => identity.Setup(x => x.IsAuthenticated).Returns(true); context["The external login succeeds"] = () => { before = () => oauth.Setup(x => x.VerifyAuthentication(It.IsAny<string>())).Returns(new AuthenticationResult(true, providerName, "provideruserid", "username", new Dictionary<string, string>())); context["External login already exists for current user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Should add 'login successful' alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("Login successful"); alerts[0].AlertType.should_be(AlertType.Success); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; context["External login already exists for another user"] = () => { before = () => authService.Setup(x => x.ExternalLoginExistsForAnyOtherUser(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>())).Returns(true); it["Adds an error alert"] = () => { var alerts = (IList<Alert>)controller.TempData[TempDataKeys.AlertCollection]; alerts[0].Message.should_be_same("The external login you requested is already associated with a different user account"); alerts[0].AlertType.should_be(AlertType.Error); }; it["Should return a redirect result"] = () => actionResult.should_cast_to<RedirectToRouteResult>(); }; This approach seemed to work magnificently until I prepared to write test code for my ApplicationServices layer, to which I delegate viewmodel manipulation from my MVC controllers, and which coordinates the operations of the lower data repository layer: public void CreateUserAccountFromExternalLogin(RegisterExternalLoginModel model) { throw new NotImplementedException(); } public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId) { throw new NotImplementedException(); } public string GetLocalUserName(string provider, string providerUserId) { throw new NotImplementedException(); } I have no idea what in the world to name the test class, the test methods, or even if I should perhaps fold the testing for this layer into the test class from my large code snippet above, so that a single feature or user action could be tested without regard to architectural layering.
I can't find any tutorials or blog posts which cover more than simple examples, so I would appreciate any recommendations or pointing in the right direction. I would even welcome "your question is invalid"-type answers as long as some explanation is provided.
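For what it's worth, one option is to name the spec class after the service, name the method-level contexts after the behaviour under test, and let nested contexts carry the permutations, just as in the controller specs above; scenarios that cut across layers can stay in the controller specs. Below is a minimal sketch under those assumptions. AuthService and IExternalLoginRepository are illustrative stand-ins for the real application-service class and its dependency (wired up Moq-style, as in the snippet above), and the base class follows the usual NSpec convention of deriving from nspec.

using Moq;
using NSpec;

// Illustrative dependency and service; replace with your real types.
interface IExternalLoginRepository
{
    string OwnerOf(string provider, string providerUserId);
    void Add(string userName, string provider, string providerUserId);
}

class AuthService
{
    readonly IExternalLoginRepository repo;
    public AuthService(IExternalLoginRepository repo) { this.repo = repo; }

    public void AssociateExternalLoginWithUser(string userName, string provider, string providerUserId)
    {
        // Stand-in logic purely so the sketch is self-contained.
        if (repo.OwnerOf(provider, providerUserId) == null)
            repo.Add(userName, provider, providerUserId);
    }
}

// One spec class per service; one method per behaviour; nested contexts per permutation.
class AuthService_specs : nspec
{
    AuthService service;
    Mock<IExternalLoginRepository> repository;

    void when_associating_an_external_login_with_a_user()
    {
        before = () =>
        {
            repository = new Mock<IExternalLoginRepository>();
            service = new AuthService(repository.Object);
        };

        act = () => service.AssociateExternalLoginWithUser("bob", "google", "123");

        context["no user owns that external login yet"] = () =>
        {
            before = () => repository.Setup(r => r.OwnerOf("google", "123")).Returns((string)null);

            it["persists the association"] = () =>
                repository.Verify(r => r.Add("bob", "google", "123"));
        };

        context["another user already owns that external login"] = () =>
        {
            before = () => repository.Setup(r => r.OwnerOf("google", "123")).Returns("alice");

            it["does not persist a duplicate association"] = () =>
                repository.Verify(r => r.Add(It.IsAny<string>(), It.IsAny<string>(), It.IsAny<string>()), Times.Never());
        };
    }
}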

    Read the article

  • Criminals and Other Illegal Characters

    - by Most Valuable Yak (Rob Volk)
    SQLTeam's favorite Slovenian blogger Mladen (b | t) had an interesting question on Twitter: http://www.twitter.com/MladenPrajdic/status/347057950470307841 I liked Kendal Van Dyke's (b | t) reply: http://twitter.com/SQLDBA/status/347058908801667072 And he was right!  This is one of those pretty-useless-but-sounds-interesting propositions that I've based all my presentations on, and most of my blog posts. If you read all the replies you'll see a lot of good suggestions.  I particularly like Aaron Bertrand's (b | t) idea of going into the Unicode character set, since there are over 65,000 characters available.  But how to find an illegal character?  Detective work? I'm working on the premise that if SQL Server will reject it as a name it would throw an error.  So all we have to do is generate all Unicode characters, rename a database with that character, and catch any errors. It turns out that dynamic SQL can lend a hand here: IF DB_ID(N'a') IS NULL CREATE DATABASE [a]; DECLARE @c INT=1, @sql NVARCHAR(MAX)=N'', @err NVARCHAR(MAX)=N''; WHILE @c<65536 BEGIN BEGIN TRY SET @sql=N'alter database ' + QUOTENAME(CASE WHEN @c=1 THEN N'a' ELSE NCHAR(@c-1) END) + N' modify name=' + QUOTENAME(NCHAR(@c)); RAISERROR(N'*** Trying %d',10,1,@c) WITH NOWAIT; EXEC(@sql); SET @c+=1; END TRY BEGIN CATCH SET @err=ERROR_MESSAGE(); RAISERROR(N'Ooops - %d - %s',10,1,@c,@err) WITH NOWAIT; BREAK; END CATCH END SET @sql=N'alter database ' + QUOTENAME(NCHAR(@c-1)) + N' modify name=[a]'; EXEC(@sql); The script creates a dummy database "a" if it doesn't already exist, and only tests single characters as a database name.  If you have databases with single character names then you shouldn't run this on that server. It takes a few minutes to run, but if you do you'll see that no errors are thrown for any of the characters.  It seems that SQL Server will accept any character, no matter where they're from.  (Well, there's one, but I won't tell you which. Actually there's 2, but one of them requires some deep existential thinking.) The output is also interesting, as quite a few codes do some weird things there.  I'm pretty sure it's due to the font used in SSMS for the messages output window, not all characters are available.  If you run it using the SQLCMD utility, and use the -o switch to output to a file, and -u for Unicode output, you can open the file in Notepad or another text editor and see the whole thing. I'm not sure what character I'd recommend to answer Mladen's question.  I think the standard tab (ASCII 9) is fine.  There's also several specific separator characters in the original ASCII character set (decimal 28-31). But of all the choices available in Unicode whitespace, I think my favorite would be the Mongolian Vowel Separator.  Or maybe the zero-width space. (that'll be fun to print!)  And since this is Mladen we're talking about, here's a good selection of "intriguing" characters he could use.

    Read the article

  • Using 3rd Party JavaScript Plugins Hardwired With 'document.write'

    - by ToStringTheory
    Introduction Have you ever had the need to implement a 3rd party JavaScript plugin, but your needs didn't fit the model and usage defined by the API or documentation of the plugin?  Recently I ran into this issue when I was trying to implement a web snapshot plugin into our site.  To use their plugin, you had to include a script tag pointing at the plugin on their server with an API key.  The second part of the usage was to include a <script> tag around a function call wherever you wanted a snapshot to appear. The Problem When trying to use the service, the images did not display.  I checked a couple of things and didn't find anything wrong at first.  It wasn't until I looked at the function that was called by the inline script that I found the issue – a call to the web service, followed by a call to 'document.write' in its callback.  In my case, the plugin was being invoked in response to an AJAX call after the document had completely loaded.  After the page has loaded, document.write does nothing. My first thought for a solution was to just cache the script from the service and edit it to do something like a return value or callback that I could use to edit the document from.  However, I quickly discovered that there is no way to cache the script from the service, as it had a hash in the function where it would call the server.  The hash was updated every few seconds/minutes, expiring old hashes.  This meant that I wouldn't be able to edit the script and upload a new version to my server, as the script would stop working a few minutes after originally getting it from the service. Solution The solution eluded me until I realized that this was JavaScript I was dealing with.  A language designed so that you could do just about anything to any library, function, or object…  At this point, the solution was simple – take control of the document.write function.  Using a buffer variable and a simple function call, it is eerily simple to perform: //what would have been output to the document var buffer = ""; //store a reference to the real document.write var dw = document.write; //redefine document.write to store to our buffer document.write = function (str) {buffer += str;} //execute the function containing calls to document.write eval('{function encapsulated in <script></script> tags}'); //restore the original document.write function (just in case) document.write = dw; That's it.  Instead of using the script tags where I wanted to include a snapshot, I called a function passing in the URL to the page I wanted a snapshot of.  After that last line of code, what would have been output to the document (or not, in the case of the AJAX call) was instead stored in buffer. Conclusion While the solution itself is simple, coming from a background much more rooted in the .NET platform, I believe that this is a prime example of always keeping in mind the language that you are working in.  While this may seem obvious at first, as I KNEW I was in JavaScript, I never thought of taking control of the document.write function because I am more accustomed to the .NET world.  I can't simply replace the functionality of Console.WriteLine.

    Read the article
