Search Results

Search found 21592 results on 864 pages for 'custom task'.


  • Architecture or Pattern for handling properties with custom setter/getter?

    - by Shelby115
    Current situation: I'm doing a simple MVC site for keeping journals as a personal project, and I'm trying to keep the interaction between the pages and the classes simple. Where I run into issues is the password field: my setter encrypts the password, so the getter retrieves the encrypted password.

        public class JournalBook
        {
            private IEncryptor _encryptor { get; set; }
            private String _password { get; set; }

            public Int32 id { get; set; }
            public String name { get; set; }
            public String description { get; set; }
            public String password
            {
                get { return this._password; }
                set { this.setPassword(this._password, value, value); }
            }
            public List<Journal> journals { get; set; }
            public DateTime created { get; set; }
            public DateTime lastModified { get; set; }
            public Boolean passwordProtected
            {
                get { return this.password != null && this.password != String.Empty; }
            }
            ...
        }

    I'm currently using model binding to submit changes or create new JournalBooks (as below). The problem is that in the code below book.password is always null; I'm fairly sure this is because of the custom setter.

        [HttpPost]
        public ActionResult Create(JournalBook book)
        {
            // Create the JournalBook if not null.
            if (book != null)
                this.JournalBooks.Add(book);
            return RedirectToAction("Index");
        }

    Question(s): Should I be handling this somewhere other than the property's getter/setter? Is there a pattern or architecture that allows for model binding (or another simple method) when properties need custom getters/setters to manipulate the data? To summarize, how can I handle password storage with encryption such that the architecture is robust, I don't store the password as plaintext, and submitting a new or modified JournalBook is as easy as default model binding (or close to it)?
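
    One pattern that keeps default model binding intact is to bind to a plain view model and perform the encryption in one explicit domain call rather than in a property setter. A minimal sketch, not the poster's code; JournalBookForm, SetPassword and the mapping are hypothetical names:

        // Plain DTO: binds cleanly because it has no custom setters.
        public class JournalBookForm
        {
            public String name { get; set; }
            public String description { get; set; }
            public String password { get; set; } // plaintext, only in transit
        }

        [HttpPost]
        public ActionResult Create(JournalBookForm form)
        {
            if (form != null)
            {
                var book = new JournalBook { name = form.name, description = form.description };
                book.SetPassword(form.password, this._encryptor); // encrypt once, explicitly
                this.JournalBooks.Add(book);
            }
            return RedirectToAction("Index");
        }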

    Read the article

  • Where is my SharePoint 2010 Custom Timer Job running?

    - by spano
    When building a custom timer job for SharePoint 2010, special attention should be paid to where we need the job to run. In a farm environment, we can choose to run the job on all servers, on one particular server, only on the front-end servers, etc. Depending on our requirements, the timer job implementation and installation approach will change, so we should decide where we want it to run in the first place. All SharePoint timer jobs ultimately inherit from the SPJobDefinition class. This...(read more)
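
    For context, the SPJobLockType passed to the SPJobDefinition base constructor is what controls where the job runs. A minimal sketch of a web-application-scoped job (class name, title and body are illustrative):

        using System;
        using Microsoft.SharePoint.Administration;

        public class MyCustomJob : SPJobDefinition
        {
            public MyCustomJob() : base() { } // required for serialization

            public MyCustomJob(string name, SPWebApplication webApp)
                // SPJobLockType.Job: run on a single server only.
                // SPJobLockType.None: run on every server where the parent service runs.
                // SPJobLockType.ContentDatabase: run once per content database.
                : base(name, webApp, null, SPJobLockType.Job)
            {
                this.Title = "My Custom Job";
            }

            public override void Execute(Guid targetInstanceId)
            {
                // Job logic goes here.
            }
        }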

    Read the article

  • Develop a custom referenced functoid and include it in a map.

    How to develop a custom referenced functoid, and how to include and use it in a map. A referenced functoid is one that is coded using a .NET language to create a class file that is referenced by the functoid at run time. (read more) By BiZTech Know
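
    For context, a referenced functoid derives from BaseFunctoid and points the mapper at an external method via SetExternalFunctionName. A minimal sketch, assuming the Microsoft.BizTalk.BaseFunctoids assembly; the names and ID are illustrative, and literal strings stand in for the resource-assembly lookups real functoids use:

        using Microsoft.BizTalk.BaseFunctoids;

        public class UpperCaseFunctoid : BaseFunctoid
        {
            public UpperCaseFunctoid() : base()
            {
                this.ID = 6001; // custom functoid IDs should be above 6000
                this.Category = FunctoidCategory.String;
                SetMinParams(1);
                SetMaxParams(1);
                AddInputConnectionType(ConnectionType.AllExceptRecord);
                this.OutputConnectionType = ConnectionType.AllExceptRecord;

                // The "referenced" part: at run time the map calls this
                // assembly/class/method instead of code embedded in the map.
                SetExternalFunctionName(GetType().Assembly.FullName,
                    "Sample.Functoids.UpperCaseFunctoid", "ToUpper");
            }

            public string ToUpper(string value)
            {
                return (value ?? string.Empty).ToUpper();
            }
        }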

    Read the article

  • How to get a "High Quality" Custom Logo Design

    Custom logo designs are logos designed specifically, uniquely and creatively for a business. The competition between the different logo design service providers is fierce. There is a flood of service... [Author: Claudia Winifred - Web Design and Development - March 20, 2010]

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. But when I try to deploy my code via Capistrano to the same server from my local project folder, the server asks for a password. I don't understand what the issue could be if I'm able to ssh but unable to deploy to the same server.

        $ cap deploy:setup
        "no seed data"
          triggering start callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
        * 13:42:18 == Currently executing `development'
        * 13:42:18 == Currently executing `deploy:setup'
          triggering before callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `db:configure_mongoid'
        * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
          servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"
        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                  # Your HTTP server, Apache/etc
        role :app, ip                  # This may be the same as your `Web` server
        role :db, ip, :primary => true # This is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use the 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        namespace :uploads do
          desc 'Create symlink for public uploads'
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
        EOF
            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Stop the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            run "/etc/init.d/redis-server restart" # the original omitted `run`, so the task did nothing
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end
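
    Two things worth checking here, hedged since they go beyond what the question shows: the output lists two servers (["dev1.noob.com", "176.9.24.217"]), and Capistrano will prompt for a password if any one of them rejects the key, so the second server may simply not have id_rsa.pub in its authorized_keys. And if the right key just isn't being offered, Capistrano (Net::SSH) can be pointed at it explicitly; a minimal sketch, path illustrative:

        ssh_options[:keys] = [File.expand_path("~/.ssh/id_rsa")]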

    Read the article

  • Nginx redirect requests to sub-domains that do not exist to custom 404 page when wild card A record is set?

    - by Anagio
    Is there a way to capture all requests to arbitrary sub-domains which do not have a virtual host set up, and redirect them to a custom 404 page in nginx? I will have a wildcard A record (*.example.com), and all our users will have a sub-domain, username.example.com. If someone enters a sub-domain which does not exist, how can I redirect to a custom 404 page rather than have it resolve, since the wildcard is set up?
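
    A catch-all default_server block is the usual way to handle this; nginx falls back to it for any Host header that no explicit server_name claims. A minimal sketch (paths and file names are illustrative):

        server {
            listen 80 default_server;
            server_name _;

            error_page 404 /custom_404.html;
            location = /custom_404.html {
                root /var/www/errors;
                internal;
            }

            location / {
                return 404;
            }
        }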

    Read the article

  • Is it possible to use some form of code, for example in PuTTY, to execute a task which is done remotely?

    - by xnxmx
    Basically, every morning at 6:00 AM I have to log in to a remote desktop, open a program, and click on a few things to make reservations before anyone else does. I want to know if there is any way this can be done by turning it into some form of code and executing that instead of doing it manually. Of course, time is precious here and the task needs to be done at the same pace, if not faster. Thanks!!!
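
    Whether this can be scripted depends on whether the reservation program can be driven without its GUI; the following is a hedged sketch with illustrative host, key and command names. If the remote side is reachable over SSH, PuTTY's command-line companion plink plus Windows Task Scheduler can run the whole thing unattended:

        rem Schedule a remote command for 06:00 every day
        schtasks /create /tn "MorningReservation" /sc daily /st 06:00 ^
            /tr "plink -ssh user@reservations-host -i C:\keys\reservation.ppk ./make_reservation.sh"

    If the program is GUI-only over Remote Desktop, a GUI-automation tool such as AutoIt or AutoHotkey, scheduled the same way, is the usual fallback.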

    Read the article

  • How to protect custom shapes from being reused? Visio 2010

    - by Chris
    We are building a set of documentation for our business with Visio 2010. We need to make the Visio files accessible to external consultants for review, but we want to ensure that they cannot copy any of our custom shapes or formulas. How can we protect custom shapes/stencils so that they cannot be used outside of our documents? Or, if that's not possible, how can we mark our shapes in such a way that we could prove that they were created by us?

    Read the article

  • Where is the used memory in Task Manager & Resource Monitor coming from?

    - by Sam Adams
    On Windows 7, working set plus private memory does not add up to the total used memory shown in Task Manager and Resource Monitor. How do you find out where the used memory is coming from? The cached memory can't be part of it, because sometimes the total cache is greater than the total in use. Commit plus working set doesn't add up to the total in use either; and even if it did, it shouldn't be significant, since commit is virtual.

    Read the article

  • How to add a custom location in the Save As dialog box?

    - by Ram
    Hi, I want to add a custom folder location to the "Save As" and "Save" dialog boxes. Currently they show "My Computer", "Desktop", "My Documents" and "My Recent Documents" options on the left-hand side of the dialog box. I want to add a custom location, "C:\Test", there. How can I do that?
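
    On Windows XP-era common dialogs that left-hand strip is the "places bar", and it can be overridden per user in the registry; a minimal sketch as a .reg file, with the caveat that this is an assumption worth testing and affects applications using the standard common dialog, not Office, which keeps its own list:

        Windows Registry Editor Version 5.00

        ; Place0..Place4 fill the five slots; each value may be a
        ; folder path (string) or a CSIDL number (dword, 0 = Desktop)
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\PlacesBar]
        "Place0"="C:\\Test"
        "Place1"=dword:00000000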

    Read the article

  • How can I create a cron job that runs a task every three weeks?

    - by itj
    I have a task that needs to be performed on my project's three-week schedule. I'm able to set up cron to do this every week, or (for example) on the 3rd week of every month, but I can't find a way to do it every three weeks. I could hack the script to create temporary files (or similar) so it could work out that this is the third time it has been run, but that solution smells. Can it be done in a clean way?
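
    A common clean-ish trick is to schedule the job weekly and let the command itself test the week number modulo 3; a minimal sketch (the day, time and path are illustrative, and note that % must be escaped in a crontab):

        # Every Monday at 00:00; only proceed on every third week since the Unix epoch
        0 0 * * 1  [ $(( $(date +\%s) / 604800 \% 3 )) -eq 0 ] && /path/to/task.sh

    Adding a constant before the modulo shifts which of the three weeks is the "on" week, so it can be lined up with the project schedule.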

    Read the article

  • Adding a new target type to msbuild: How do I refer to the itemname in the task rules?

    - by jmucchiello
    I'm trying to add a task to build the COM proxy DLL after building the main DLL, so I created the following in a .targets file:

        <Target Name="ProxyDLL"
                Inputs="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c"
                Outputs="$(OutDir)%(WHATGOESHERE)ps.dll"
                AfterTargets="Link">
          <CL Sources="$(IntDir)%(WHATGOESHERE)_i.c;$(IntDir)dlldata.c" />
        </Target>

    and reference it from the .vcxproj file as:

        <ItemGroup>
          <ProxyDLL Include="FTAccountant" />
        </ItemGroup>

    So FTAccountant.DLL is created through the normal build process, and then when it attempts to compile the proxy stubs it creates this command line:

        cl /c dir\_i.c dir\dlldata.c

    and of course it can't find _i.c. On the first attempt I put %(Filename) in the WHATGOESHERE spots and got this error:

        C:\ActivePay\Build\Proxy DLL.targets(6,3): error MSB4095: The item metadata %(Filename) is
        being referenced without an item name. Specify the item name by using %(itemname.Filename).

    So I changed it to %(itemname.Filename), and that is an empty string. How do I get the value specified in the item's Include attribute and use it within the target?
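
    Qualifying the metadata with the actual item type name (ProxyDLL) makes MSBuild batch the target once per item, and the value from the Include attribute is available as the built-in %(Identity) metadata; %(itemname.Filename) came back empty because "itemname" in the error message is a placeholder, not a literal keyword. A minimal sketch of the .targets file under that assumption (untested against this particular project):

        <Target Name="ProxyDLL"
                Inputs="$(IntDir)%(ProxyDLL.Identity)_i.c;$(IntDir)dlldata.c"
                Outputs="$(OutDir)%(ProxyDLL.Identity)ps.dll"
                AfterTargets="Link">
          <CL Sources="$(IntDir)%(ProxyDLL.Identity)_i.c;$(IntDir)dlldata.c" />
        </Target>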

    Read the article

  • How to run a long task in the background in iOS applications?

    - by John Canady
    I am developing an application which requires running a task in the background. I am trying this by calling a method which will retrieve (download) data from a web server through web services. This method will call some more methods which are in different view controller classes. When I tap the device's home button the method is called, but the remaining code does not execute. This is the code I have written in applicationDidEnterBackground::

        - (void)applicationDidEnterBackground:(UIApplication *)application
        {
            UIApplication *app = [UIApplication sharedApplication];
            __block UIBackgroundTaskIdentifier bgTask = 0;
            bgTask = [app beginBackgroundTaskWithExpirationHandler:^{
                [app endBackgroundTask:bgTask];
                bgTask = UIBackgroundTaskInvalid;
            }];

            // Start the long-running task and return immediately.
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                // Do the work associated with the task.
                NSString *updatekey = [[NSUserDefaults standardUserDefaults] objectForKey:@"updatesetting"];
                if ([updatekey isEqualToString:@"enabled"]) {
                    DataSettingsView *periodicUpdate = [[DataSettingsView alloc] init];
                    [periodicUpdate updateDataPeriodically];
                    //[periodicUpdate viewDidLoad];
                    //[periodicUpdate release];
                }
                [app endBackgroundTask:bgTask];
                bgTask = UIBackgroundTaskInvalid;
            });
        }

    Can someone please help me with background execution of long tasks, ideally with some example code? Any help will be appreciated.

    Read the article

  • How to avoid concurrent execution of a time-consuming task without blocking?

    - by Diego V
    I want to efficiently avoid concurrent execution of a time-consuming task in a heavily multi-threaded environment, without making threads wait for a lock when another thread is already running the task. Instead, in that scenario, I want them to gracefully fail (i.e. skip their attempt to execute the task) as fast as possible. To illustrate the idea, consider this unsafe (it has a race condition!) code:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            running = true;
            try {
                runExpensiveTask();
            } finally {
                running = false;
            }
        }

    I thought about using a variation of double-checked locking (consider that running is a primitive 32-bit field, hence atomic, so it could work fine even for Java below 5 without the need for volatile). It could look like this:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            synchronized (ThisClass.class) {
                if (running) return;
                running = true;
                try {
                    runExpensiveTask();
                } finally {
                    running = false;
                }
            }
        }

    Maybe I should also use a local copy of the field (not sure now, please tell me). But then I realized that either way I end up with an inner synchronized block, which could still hold a thread with the right timing at the monitor entrance until the original executor leaves the critical section (I know the odds are usually minimal, but here we are imagining several threads competing for this long-running resource). So, can you think of a better approach?
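
    The usual answer to this shape of problem is a single atomic test-and-set, which gives the fail-fast behavior with no monitor for threads to queue on; a minimal sketch (Java 5+, class and method names are illustrative):

        import java.util.concurrent.atomic.AtomicBoolean;

        public class ExpensiveTaskGate {
            private static final AtomicBoolean running = new AtomicBoolean(false);

            public void launchExpensiveTask() {
                // Exactly one thread flips false -> true; all others return immediately.
                if (!running.compareAndSet(false, true)) return;
                try {
                    runExpensiveTask();
                } finally {
                    running.set(false);
                }
            }

            private void runExpensiveTask() { /* long-running work */ }
        }

    java.util.concurrent.locks.ReentrantLock.tryLock() gives the same skip-if-busy semantics when a real lock object is preferred.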

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system?

    Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type.

    Now egg type is broken down by species: ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) is identifying egg types with an industry-standard integer representation of the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions:

    1. Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever).

    2. Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service.

    I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity, and as such should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion?

    Summary: Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
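
    To make the mechanics of solution #2 concrete: a composite identifier needs one canonical string form so it can survive JSON bodies and URLs unchanged. A minimal C# sketch, with every name hypothetical:

        public sealed record EggTypeId(int SpeciesCode, int EggSizeCode)
        {
            // Canonical form used in URLs and JSON, e.g. "4-2"
            public override string ToString() => $"{SpeciesCode}-{EggSizeCode}";

            public static EggTypeId Parse(string value)
            {
                var parts = value.Split('-');
                return new EggTypeId(int.Parse(parts[0]), int.Parse(parts[1]));
            }
        }

    Every service that handles the ID has to agree on Parse/ToString (and on what happens when a third field like isOrganic arrives), which is exactly the encapsulation leak the question worries about.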

    Read the article

  • Pivot Table grand total across columns

    - by Jon
    I'm using Excel 2010 and Power Pivot. I'm trying to calculate confidence and velocity for a development team. Each day I extract some information from our time and defect system and add it to a data set: one row per task in the current project, with the estimate for that task and the time spent on it so far. What I want Excel to calculate is estimate vs. actual for each task, but also for each person. The trouble is that the actual is cumulative from day to day, so I need to pick out the maximum value for each task; the estimate should remain unchanged. I can make this work at the task level with a calculated measure (=MAX(worked)/MAX(estimate)), but I don't know how to total this up for a person: I need the sum of the max worked for each task. A data set might look like:

        Name  Task  Estimate  Worked
        N1    T1    3         1
        N2    T2    3         1
        N3    T3    4         1
        N1    T1    3         2
        N2    T4    5         1
        N3    T3    4         2
        N1    T5    1         2
        N2    T6    2         3
        N3    T7    3         2

    For task T1 I want to see that 2 days were worked against an estimate of 3 days, so 2/3. For person N1 I want to see that they worked a total of 4 days against an estimate of 4 days, so 4/4. For person N2, 5 days worked against an estimate of 10 days. Any ideas on how I can achieve this?
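
    In DAX this "sum of per-task maxima" shape is usually written by iterating the distinct tasks; a minimal sketch, assuming a table named Data with the columns above (in the Excel 2010 measure dialog you would enter just the part after :=):

        WorkedTotal   := SUMX(VALUES(Data[Task]), CALCULATE(MAX(Data[Worked])))
        EstimateTotal := SUMX(VALUES(Data[Task]), CALCULATE(MAX(Data[Estimate])))
        Confidence    := [WorkedTotal] / [EstimateTotal]

    When the pivot is sliced by Name, VALUES(Data[Task]) returns only that person's tasks, so the same measures produce both the per-task and per-person ratios.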

    Read the article

  • Time to stop using "Execute Package Task" – a way to execute a package in the SSIS catalog taking advantage of the new project deployment model, and the logging and reporting feature

    - by Kevin Shyr
    I set out to find a way to dynamically call a package in SSIS 2012. The following are two excellent blogs I found and used heavily. The code below adds some parameter types and message types, but is essentially derived from the blogs.

    http://sqlblog.com/blogs/jamie_thomson/archive/2011/07/16/ssis-logging-in-denali.aspx
    http://www.ssistalk.com/2012/07/24/quick-tip-run-ssis-2012-packages-synchronously-and-other-execution-options/

    The code: every package will be called by a PackageController package, which is initialized with information on which package to run and what to pass in. The following stored procedure is called from the "Execute SQL Task". Its highlights:

    - It takes in a package name, a project name, and a folder name (the folder in the SSIS project deployment to the SSIS catalog).
    - It sets the package variables of the upcoming package execution.
    - It executes the package in the SSIS catalog.
    - It gets the status of the execution and, if error messages exist, stores their message_ids in the management database.
    - It returns a value to the "Execute SQL Task" so that failure is handled properly.

        CREATE PROCEDURE [AUDIT].[LaunchPackageExecutionInSSISCatalog]
            @PackageName NVARCHAR(255)
            , @ProjectFolder NVARCHAR(255)
            , @ProjectName NVARCHAR(255)
            , @AuditKey INT
            , @DisableNotification BIT
            , @PackageExecutionLogID INT
        AS
        BEGIN TRY
            DECLARE @execution_id BIGINT = 0;

            -- Create a package execution
            EXEC [SSISDB].[catalog].[create_execution]
                @package_name = @PackageName,
                @execution_id = @execution_id OUTPUT,
                @folder_name = @ProjectFolder,
                @project_name = @ProjectName,
                @use32bitruntime = False;

            UPDATE [AUDIT].[PackageInstanceExecutionLog] WITH(ROWLOCK)
            SET [SSISCatalogExecutionID] = @execution_id
            WHERE [PackageInstanceExecutionLogID] = @PackageExecutionLogID

            -- Make the execution synchronous so the result can be checked at the end
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'SYNCHRONIZED',
                @parameter_value = 1; -- true

            /* Section: setting parameters
               Source table: SSISDB.internal.object_parameters
               object_type list:
                   20: project level variables
                   30: package level variables
                   50: execution parameter */
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_AuditKey',
                @parameter_value = @AuditKey;

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_DisableNotification',
                @parameter_value = @DisableNotification;

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 30,
                @parameter_name = N'FromParent_PackageInstanceExecutionID',
                @parameter_value = @PackageExecutionLogID;
            /* Section: setting variables END */

            /* This section is carried over from the example code;
               I don't see a reason to change it yet. */
            -- Set our package parameters
            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_ON_EVENT',
                @parameter_value = 1; -- true

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_EVENT_CODE',
                @parameter_value = N'0x80040E4D;0x80004005';

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'LOGGING_LEVEL',
                @parameter_value = 1; -- Basic

            EXEC [SSISDB].[catalog].[set_execution_parameter_value]
                @execution_id,
                @object_type = 50,
                @parameter_name = N'DUMP_ON_ERROR',
                @parameter_value = 1; -- true

            /* Section: EXECUTING */
            EXEC [SSISDB].[catalog].[start_execution] @execution_id;
            /* Section: EXECUTING END */

            /* Section: checking execution result
               Source table: [SSISDB].[catalog].[executions]
               status: 1: created, 2: running, 3: cancelled, 4: failed, 5: pending,
                       6: ended unexpectedly, 7: succeeded, 8: stopping, 9: completed */
            IF EXISTS(SELECT TOP 1 1
                      FROM [SSISDB].[catalog].[executions] WITH(NOLOCK)
                      WHERE [execution_id] = @execution_id
                            AND [status] NOT IN (2, 7, 9))
            BEGIN
                /* Section: logging error messages
                   Source table: [SSISDB].[internal].[operation_messages]
                   message type:
                       10: OnPreValidate    20: OnPostValidate   30: OnPreExecute
                       40: OnPostExecute    60: OnProgress       70: OnInformation
                       90: Diagnostic      110: OnWarning       120: OnError
                      130: Failure         140: DiagnosticEx    200: Custom events
                      400: OnPipeline
                   message source type:
                       10: Messages logged by the entry APIs (e.g. T-SQL, CLR stored procedures)
                       20: Messages logged by the external process used to run the package (ISServerExec)
                       30: Messages logged by the package-level objects
                       40: Messages logged by tasks in the control flow
                       50: Messages logged by containers (For, ForEach, Sequence) in the control flow
                       60: Messages logged by the Data Flow Task */
                INSERT INTO AUDIT.PackageInstanceExecutionOperationErrorLink
                    SELECT @PackageExecutionLogID
                           , [operation_message_id]
                    FROM [SSISDB].[internal].[operation_messages] WITH(NOLOCK)
                    WHERE operation_id = @execution_id
                          AND message_type IN (120, 130)

                EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, 'SSISDB Internal operation_messages found'

                GOTO ReturnTrueAsErrorFlag
                /* Section: checking messages END */

                /* This part is not really working, so now using rowcount to pass status
                --DECLARE @PackageErrorMessage NVARCHAR(4000)
                --SET @PackageErrorMessage = @PackageName + 'failed with executionID: ' + CONVERT(VARCHAR(20), @execution_id)

                --RAISERROR (@PackageErrorMessage -- Message text.
                --     , 18 -- Severity,
                --     , 1 -- State,
                --     , N'check table AUDIT.PackageInstanceExecutionErrorMessages' -- First argument.
                --     );
                */
            END
            ELSE BEGIN
                GOTO ReturnFalseAsErrorFlagToSignalSuccess
            END
            /* Section: checking execution result END */
        END TRY
        BEGIN CATCH
            DECLARE @SSISCatalogCallError NVARCHAR(MAX)
            SELECT @SSISCatalogCallError = ERROR_MESSAGE()

            EXEC [AUDIT].[FailPackageInstanceExecution] @PackageExecutionLogID, @SSISCatalogCallError

            GOTO ReturnTrueAsErrorFlag
        END CATCH;

        /* Section: end result */
        ReturnTrueAsErrorFlag:
            SELECT CONVERT(BIT, 1) AS PackageExecutionErrorExists
            RETURN -- stop here so execution does not fall through to the success label

        ReturnFalseAsErrorFlagToSignalSuccess:
            SELECT CONVERT(BIT, 0) AS PackageExecutionErrorExists
        GO

    Read the article

  • Customizing Spaces UI

    - by vijaykumar.yenne
    In most common scenarios we stumble upon use cases to customize the WebCenter Spaces UI. Is the Spaces UI customizable? To what extent can we customize it? How do I customize it? These are some questions that developers/architects normally come across. To clear the air: OOTB Spaces comes with some default "site templates", and it also gives the flexibility to create custom site templates suiting the organization's needs. The site templates concept was introduced in the latest PS1 release of WebCenter; to customize or create a new site template, we have to leverage the Extend Spaces project available on OTN. You can download the project from here. There is also a white paper listing what can be customized/extended from a Spaces perspective, and specific details on creating a custom site template are outlined in the Customizing Site Template white paper.

    One of the things the white paper highlights is that while you can create new site templates and modify the sample site templates, you cannot modify either of the out-of-the-box site templates, i.e. default and maximized. So if my need is to either increase the size of the header to fit a bigger logo, or introduce a couple of extra links in the default/maximized layout, how do I achieve this? All you need to do is customize the OOTB shell (shell-config.xml):

    1. Copy the shell configs available in the Source Files directory of the extracted Extend Spaces download into the CustomSiteTemplate project, at ExtendWebCenterSpaces\CustomSiteTemplate\custom\oracle\webcenter\webcenterapp\metadata\shell
    2. Modify the appropriate shell.
    3. Deploy the CustomSiteTemplate as an ADF JAR.
    4. Ensure the custom WebCenter Spaces project has the profile dependency on the above project.
    5. Deploy the Spaces extension on the WebCenter Spaces instance (details in the first white paper).

    You should see the changes immediately. E.g. in the default shell I changed the height from 30 to 60 (height="60") to increase the header size.

    If you worked in the R1 release time frame and created a custom shell/chrome, how do you make it compatible and available in a Spaces PS1 instance? All you need to do is the following:

    1. Copy the custom shell into the shell directory of the CustomSiteTemplate project.
    2. Register the shell in WCSiteTemplates.xml, available in the same project. E.g. you can add an entry with pagePath="/oracle/webcenter/webcenterapp/view/templates/MyShellTemplate.jspx", pageDefPath="/oracle/webcenter/webcenterapp/bindings/pageDefs/oracle_webcenter_webcenterapp_view_templates_WebCenterAppShellTemplatePageDef.xml", displayName="myShell" and chromeLevel="myShell". (Note: pagePath is the absolute path of the template JSPX file, and this path must be unique.)

    So you might have to do the following to get your custom chrome working with no problems at all:

    1. Create a JSPX page, say /custom/mysite/SiteTemplate.jspx.
    2. Include the default JSPX in the new site template.
    3. Add the newly created site template to the WCSiteTemplate.xml file, with pagePath="/custom/mysite/SiteTemplate.jspx" and the same pageDefPath, displayName="myShell" and chromeLevel="myShell" attributes as above.

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (customer) about tasks. That information can be in the form of Word documents, HTML emails and pictures, but you generally receive it in the context of an email. You need to keep these so your team can refer to them later, and so you can send a "done" when the task has been completed. This preserves the "history" of the task and allows you to keep relevant parties included in any future conversation.

    At SSW we keep the original email so that we can reply "done" and delete the email. But keeping it in your own mailbox does not help other members of the team if they complete the task and need to send the "done". Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an "interested parties" or CC field. You can attach this content manually, but it can be time consuming.

    Figure: Description only supports plain text, and History supports HTML with no images.

    What should we do? At SSW we always follow the rules, and it just so happened that we have rules to both achieve this and make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to it and reply to it later when you close the task:

    - Do you know what Outlook add-ins you need?
    - Describe the work item request in an email.
    - Use the Outlook add-in to move the email to a TFS work item.

    When replying to an email with "done" you should follow:

    - Do you update the Team Companion template, so the email subject doesn't change?
    - Do you update the Team Companion template, so you can generate a proper "done" mail?

    Following these simple rules will help your Product Owner understand you better, and allow your team to collaborate with each other more effectively. An added bonus is that, as we keep the email history in sync with TFS, when you "reply all" to the email all of the interested parties to the task are also included. This notifies those who may be blocked by your task and keeps them up to date with its status. This has been published as "Do you know to ensure that relevant emails are attached to tasks" in our Rules to Better Scrum using TFS.

    What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an "Interested Parties" field, and each reply to the email would need to be automatically processed into a work item. This could be done by adding a task identifier as the first item in the "Relates to" email header and copying in an email address that you watch; the relevant information would then be parsed out, the new message added to the history, the "Interested Parties" field updated and the images attached. Upon reflection, it may even be possible, though more difficult, to do this using ONLY the History field, including some of the header information in there to build a "done" mail with history. This would not currently deal with email "forks" well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server; in the meantime we have a process that works well.

    Technorati Tags: Scrum, SSW Rules, TFS 2010, TFS 2008

    Read the article
