Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

Page 61/87 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Running Script before/after Variables declared?

    - by Sam
    Hi folks. As I understand it, .js files are best placed at the very bottom of HTML pages to speed up loading of the rest of the page, as advised by YSlow (Yahoo) and Page Speed (Google). Now, when something in the middle of the page runs a JavaScript function, in Internet Explorer I see a small warning message saying: Uncaught ReferenceError: SWFObject is not defined. When I put my all.js file in the head, the error goes away, but page load slows down. What to do? Actually, I remember it was the same with PHP variables: if I run PHP code but the variable is defined later, it just doesn't work; I must define the variable first for it to run. How can I make this workflow better, in the case of PHP scripts and in the case of JavaScript? Thanks!

    Read the article

  • Compact Framework : Read a SQL CE database on a PDA from a PC

    - by CF_Maintainer
    Hello, I have been tasked with upgrading a CF Framework 1.1 suite of apps. Currently, the PC starts a server [after confirming via RAPI that the device exists and is connected] and spawns an app on the PDA as the client. The client process on the PDA talks with the db on the PDA and returns records to the PC app [using SQL CE 2.0, OpenNETCF 1.4 for communication/IO]. I have a chance to upgrade the PC and PDA suite of apps to Framework 3.5 and CF 3.5 respectively. Due to a business requirement, I cannot get rid of the workflow requiring the PC app to show a preview of the work done on the PDA. Question: are there better ways to achieve the above, in general, given the constraints I have? I would really appreciate any ideas/advice.

    Read the article

  • Using emacs across many hosts

    - by mbac32768
    On a daily basis I use multiple workstations running either Linux, Windows, or MacOS X, and I edit files on additional Linux hosts that are not any of the workstations mentioned above. The only common element here is that the internet connects all of these hosts, workstations and servers alike. I can keep all of the config files in sync on my workstations, and I can run an X server on all of them. What's the right way of running emacs? I don't want to sacrifice any features. In my ideal world I can type 'emacs foo.txt' on a remote host and some magic happens via X forwarding to display the file in my workstation's existing emacs session. Non-solutions: tramp: when I'm manipulating a remote host, an editor is just part of my workflow; I need a terminal open so I can run other commands quickly, and tramp is all wrong for this. ncurses emacs: sucks, I want the graphical kind. If you don't have a positive answer to my question, please don't just guess. Thanks.

    Read the article

  • In search of opinions on web based version control systems

    - by tom smith
    Hi. I'm researching various open source, web-based document management / version control systems. I've checked Google, questions here, etc. I'm looking for a lightweight web-based (Apache) document management / version control app that runs on top of SVN. I need the ability to:
    - have multiple users check in / check out
    - have a workflow (when userA checks a file in and finishes, the app passes it to the next person, etc.)
    - have a structure where the files can be moved as a group (the files will be changed on a monthly basis)
    - have an access/permission control system (some people can see certain files, and perform certain actions on them)
    I imagine that I'm going to have 40-50 people dealing with the different files, and 2000-3000 files that have to be massaged. I'd prefer that the app be PHP based if possible, as opposed to a straight Java app. Thanks

    Read the article

  • Sending Tasks using an offline Outlook

    - by ASV
    Hi all, I have a scenario wherein I need to send/assign tasks from my browser UI to the people concerned. This should happen with Outlook offline (or, for that matter, Outlook not even configured on the terminal), so that a user can access a workflow from any terminal (using his/her AD credentials) and, if required, send a task to the person concerned without having to return to his own terminal to do so. I envision that the user's credentials should be used to look up AD for his/her email ID and send a task using it from anywhere on the intranet. Using the Outlook object library I have been able to assign/send tasks, but only with Outlook fired up, not otherwise. Redemption syncs contacts while Outlook is offline, but not tasks. Kindly help if anybody has had a chance to do something similar. Thanks in advance.

    Read the article

  • How to skip "Loose Object" popup when running 'git gui'

    - by Michael Donohue
    When I run 'git gui' I get a popup that says: This repository currently has approximately 1500 loose objects. It then suggests compressing the database. I've done this before, and it reduces the loose objects to about 250, but that doesn't suppress the popup. Compressing again doesn't change the number of loose objects. Our current workflow requires significant use of 'rebase' as we are transitioning from Perforce, and Perforce is still the canonical SCM. Once Git is the canonical SCM, we will do regular merges, and the loose objects problem should be greatly mitigated. In the meantime, I'd really like to make this 'helpful' popup go away.

    Read the article

  • Are there any tools to help the user to design a State Machine to be consumed by my application?

    - by kolrie
    When reading this question I remembered there was something I had been researching for a while now, and I thought Stack Overflow could be of help. I have created a framework that handles applications as state machines. Currently all the state business logic and transitions are handled via Java code. I was looking for a UI implementation that would allow the user to draw the state machines and transitions and generate a file that can later be consumed by my framework to "run" the workflow according to one or more defined state machines. Ideally I would like to use an open standard like SCXML. The goal for the UI would be to have something like the plugin IBM has for Rational Software Architect. Do you know of any editor, plugin or library that offers something similar, or at least would serve as a good starting point?

    Read the article

  • check whether mmap'ed address is correct

    - by reddot
    I'm writing a high-load daemon that should run on FreeBSD 8.0 and on Linux as well. The main purpose of the daemon is to pass files that are requested by their identifier. The identifier is converted into a local filename/file size via a request to a db, and then I use sequential mmap() calls to pass file blocks with send(). However, sometimes there is a mismatch between the file size in the db and the file size on the filesystem (real size < size in db). In this situation I've already sent all the real data blocks, and when the next data block is mapped, mmap returns no error, just a usual address (I've also checked the errno variable; it's equal to zero after mmap). And when the daemon tries to send this block it gets a Segmentation Fault. (This behaviour is guaranteed to occur on FreeBSD 8.0 amd64.) I was using a safety check before open to ensure the size with a stat() call. However, real life shows me that a segfault can still be raised in rare situations. So, my question is: is there a way to check whether a pointer is accessible before dereferencing it? When I opened the core in gdb, gdb says the given address is out of bounds. Perhaps there is another solution somebody can propose.

    Read the article

  • DB Design Question

    - by hazimdikenli
    I am designing an org chart; the model is almost ready, and simplified a bit for clarity here:

        OrgUnit (OrgUnitId, Name, ReportsToOrgUnitId, ...)
        OrgUnitJobs (OrgUnitJobId, OrgUnitId, JobName, ReportsToOrgUnitJobId, ..., IsJobGroup)
        Employee (EmployeeId, ...)
        OrgUnitJobEmployee (OrgUnitJobId, EmployeeId, AssignedDate, ...)

    I want to know every OrgUnit's manager employee (it should have one). Employees can have more than one job, but one of them has to be the main job, so I know who their manager is, among other things. This is going to support a little workflow behind the scenes, which is why it is not a very simple org chart model. So what would you do: add a property like IsManager to the OrgUnitJobs model, or add ManagerOrgUnitJobId to the OrgUnit model? And why? Likewise, for employees, would you add an IsPrimaryJob property to the OrgUnitJobEmployee model, or a PrimaryJobId to the Employee model?

    Read the article

  • Controlling JIRA via Liferay

    - by Shayan
    I am trying to integrate JIRA into Liferay as a portlet. It seems that the best way to do that is using the IFrame portlet. However, there are a few additional things that I need to control. I am trying to model a workflow in JIRA, so depending on what happens within the Liferay portal, Liferay will need to be able to advance the status of tasks in JIRA. In addition, when a task is completed in JIRA, it needs to be able to call back to Liferay and create new users or groups if necessary. Is there some way to integrate JIRA and Liferay so that they can call each other's APIs? Thanks.

    Read the article

  • updating system's time using .Net

    - by user62958
    I am trying to update my system time using the following:

        [StructLayout(LayoutKind.Sequential)]
        private struct SYSTEMTIME
        {
            public ushort wYear;
            public ushort wMonth;
            public ushort wDayOfWeek;
            public ushort wDay;
            public ushort wHour;
            public ushort wMinute;
            public ushort wSecond;
            public ushort wMilliseconds;
        }

        [DllImport("kernel32.dll", EntryPoint = "GetSystemTime", SetLastError = true)]
        private extern static void Win32GetSystemTime(ref SYSTEMTIME lpSystemTime);

        [DllImport("kernel32.dll", EntryPoint = "SetSystemTime", SetLastError = true)]
        private extern static bool Win32SetSystemTime(ref SYSTEMTIME lpSystemTime);

        public void SetTime()
        {
            TimeSystem correctTime = new TimeSystem();
            DateTime sysTime = correctTime.GetSystemTime();

            // Call the native GetSystemTime method with the defined structure.
            SYSTEMTIME systime = new SYSTEMTIME();
            Win32GetSystemTime(ref systime);

            // Copy the corrected time into the structure.
            systime.wYear = (ushort)sysTime.Year;
            systime.wMonth = (ushort)sysTime.Month;
            systime.wDayOfWeek = (ushort)sysTime.DayOfWeek;
            systime.wDay = (ushort)sysTime.Day;
            systime.wHour = (ushort)sysTime.Hour;
            systime.wMinute = (ushort)sysTime.Minute;
            systime.wSecond = (ushort)sysTime.Second;
            systime.wMilliseconds = (ushort)sysTime.Millisecond;

            Win32SetSystemTime(ref systime);
        }

    When I debug, everything looks good and all the values are correct, but when it calls Win32SetSystemTime(ref systime) the actual system time (the displayed time) doesn't change and stays the same. The strange part is that when I call Win32GetSystemTime(ref systime) it gives me the new, updated time. Can someone give me some help on this?

    Read the article

  • Git: Help an SVN novice translate trunk/branch concepts to Git

    - by Jasconius
    So I am not much of a source control expert; I've used SVN for projects in the past. I have to use Git for a particular project (client-supplied Git repo). My workflow is such that I will be working on the files from two different computers, and often I need to check in changes that are unstable when I move from place to place so I can continue my work. What happens then is that when, say, the client goes to get the latest version, they also download the unstable code. In SVN, you can address this by creating a trunk and using working branches, or by using the trunk as the working version and creating stable branches. What is the equivalent concept in Git, and is there a simple way to do this via GitHub?

    Read the article

  • Managing Team Development on Shared Website

    - by stjowa
    I need to know the best way to manage team web development on a shared server (HostGator). I have done some individual web development on a shared server in the past, and I have always set up SVN over SSH to get a pretty nice development workflow (version control, quick commits, working through Eclipse/Subclipse, etc). However, I also know that with that setup I had to write some fairly sophisticated post-commit hooks to export the repository to /public_html and thereby make the repository code testable. This seems like a tedious and error-prone setup for an entire team. I would like to be able to:
    - easily test the latest code in the repository
    - somewhat easily move the code in the repository to production
    - use an IDE like Eclipse/Subclipse to easily work with the repository
    With this in mind, does anyone know of a good version-control/repository setup for developing a website with a team of about 4-5 people? Thanks a lot.

    Read the article

  • Mercurial setup: One central repo or several?

    - by Robert S.
    My company is switching from Subversion to Mercurial. We're using .NET for our product. We have a solution with about a dozen projects that are separate modules with no dependencies on each other. We're using a central repo on a server with push/pull for our integration build. I'm trying to figure out if I should create one central repo with all the projects in it, or if I should create a separate repo for each project. One argument for separate repos is that branching the individual modules would be easier, but an argument for a single repo is easier management and workflow. I'm very new to hg and DVCS, so some guidance is greatly appreciated.

    Read the article

  • Highlighting a piechart slice from an HTML element (mouseover)

    - by nickhar
    I have a series of HTML table cells with data, an example of which is:

        <tr id="rrow1">
          <td>
            <a href="/electricity" class="category">Electricity</a>
          </td>
          <td>
            901.471
          </td>
        </tr>
        <tr id="rrow2">...
        <tr id="rrow3">... etc

    In this case, each <tr> (or, hypothetically for the wider community, a div/span/tr/td) is assigned a sequential id based on $rrow++; in a while loop (in PHP). I also have a pie chart using the Highcharts library, where I'd like to highlight the matching slice (sliced: true) on mouseover of a particular div/span/tr/td element (in this case #rrow1 as above, but multiple/iterative elements as required) and un-highlight it (sliced: false) on mouseout. As a simple example, I've tried accessing various derivatives of the following, but failed:

        $('#rrow1').mouseover(function() { chart.series[0].graph.attr('sliced', true); });
        $('#rrow1').mouseout(function() { chart.series[0].graph.attr('sliced', false); });

    The nearest I've found is this, but bastardised at best and without success:

        plotOptions: {
          series: {
            mouseOver: function() { if( $('#rrow1').mouseover ) series.x = sliced: true; },
            mouseOut: function() { if( $('#rrow1').mouseout ) series.x = sliced: false; }
          }
        }

    These are far from correct, and despite searching I can't find a valid/helpful example to work from or draw direction from. You can view the pie chart in question on jsfiddle here.

    Read the article

  • Start-job to call script from main

    - by Naveen
    I have three scripts; from the main script (script 1) I am calling the other two scripts so that I can execute both in parallel, because running them sequentially takes too much time. Only the variables are different between the two scripts. How can I merge scripts 2 and 3 into a single script that I can call from the main script and still have it run in parallel?

        1 CompareCtrlM... Completed False localhost ######################...
        3 CompareCtrlM... Completed True  localhost ######################...

    Main script (1):

        Start-Job -Name "LoopComparectrlMasterModel" -filepath D:\tmp\naveen\Script\CompareCtrlMasterCtrlModel.ps1
        Start-Job -Name "LoopCompareProdMasterModel" -filepath D:\idv\CA\rcm_data\tmp\work\CompareCtrlMasterProdModel.ps1
        Wait-Job -Name "LoopComparectrlMasterModel"
        Receive-Job "LoopComparectrlMasterModel"
        Wait-Job -Name "LoopCompareProdMasterModel"
        Receive-Job "LoopCompareProdMasterModel"

    Script 2:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterProdModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Prod Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterProdModel[$i-1] $cfgProdModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterProdModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }

    Script 3:

        for ($i = 1; $i -lt 3; $i++) {
            $jobName = 'CompareCtrlMasterCtrlModelESS$i'
            echolog $THISSCRIPT $RCM_UPDATE_LOG_FILE $LLINFO ("Starting Ctrl Master-Ctrl Model comparison #" + $i + ", create SBT")
            $rc = CreateSbtFile $sbtCompareCtrlMasterCtrlModel[$i-1] $cfgCtrlModel $cfgCtrlMaster "" "" $SBT_MODE_COMPARE_CFGS_FULL $workDir
            Start-Job -Name "$jobName" -filepath $ExecuteSbtWithRcmClientTool -ArgumentList $sbtCompareCtrlMasterCtrlModel[$i-1],"",$true,$false | Out-Null
            Wait-Job -Name "$jobName"
            $results = Receive-Job -Name $jobName
        }
        write-output $results

    Thanks a lot for the help. Regards, Naveen

    Read the article

  • How do you use pip, virtualenv and Fabric to handle deployment?

    - by e-satis
    What are your settings, your tricks, and above all, your workflow? These tools are great, but there are still no best practices attached to their usage, so I don't know what the most efficient way is.
    - Do you use pip bundles or always download?
    - Do you set up Apache/Cherokee/MySQL by hand, or do you have a script for that?
    - Do you put everything in a virtualenv and use --no-site-packages?
    - Do you use one virtualenv for several projects?
    - What do you use Fabric for (which parts of your deployment do you script)?
    - Do you put your Fabric scripts on the client or the server?
    - How do you handle database and media file migration?
    - Do you even need a build tool such as SCons?
    - What are the steps of your deployment? How often do you perform each of them? etc.
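
    For illustration, a minimal fabfile sketch of one possible arrangement, assuming Fabric 1.x; the host name, project path and virtualenv path below are placeholders, not anything prescribed by the question:

        # fabfile.py: deployment sketch (Fabric 1.x assumed; all hosts and paths are hypothetical placeholders).
        from fabric.api import env, run, cd

        env.hosts = ['deploy@example.com']   # placeholder deployment host

        def deploy():
            """Pull the latest code and install dependencies into the project's own virtualenv."""
            with cd('/srv/myproject'):                                            # hypothetical project root
                run('git pull')                                                   # update the working copy
                run('/srv/venvs/myproject/bin/pip install -r requirements.txt')   # deps go into a per-project virtualenv
                run('touch app.wsgi')                                             # hypothetical app reload trigger

    Run it with "fab deploy"; keeping one virtualenv per project (created with --no-site-packages) is one common way to keep projects isolated from each other and from system packages.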

    Read the article

  • git hooks - regenerate a file and add it to each commit?

    - by egarcia
    I'd like to automatically generate a file and add it to a commit if it has changed. Is this possible, and if so, which hooks should I use? Context: I'm programming a CSS library. It has several CSS files, and at the end I want to produce a compacted, minimized version. Right now my workflow is:
    1. Modify the CSS files x.css and y.css
    2. git add x.css y.css
    3. Execute minimize.sh, which parses all the CSS files in my lib, minimizes them and produces a min.css file
    4. git add min.css
    5. git commit -m 'modified x and y doing foo and bar'
    I would like to have steps 3 and 4 done automatically via a git hook. Is that possible? I've never used git hooks before. After reading the man page, I think I need to use the pre-commit hook. But can I invoke git add min.css, or will I break the internet?
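
    For illustration, a sketch of one possible pre-commit hook written in Python (saved as .git/hooks/pre-commit and marked executable). It assumes minimize.sh and min.css sit at the repository root, and that staging min.css from inside the hook is acceptable here, since files added to the index during pre-commit are normally picked up by the commit being created:

        #!/usr/bin/env python
        # .git/hooks/pre-commit: regenerate min.css and stage it if it changed (sketch only).
        import subprocess
        import sys

        # Rebuild the minimized file from the current CSS sources.
        if subprocess.call(['./minimize.sh']) != 0:
            sys.exit('minimize.sh failed; aborting commit')

        # If the regenerated min.css differs from what is already staged,
        # add it so it becomes part of this commit.
        if subprocess.call(['git', 'diff', '--quiet', '--', 'min.css']) != 0:
            subprocess.check_call(['git', 'add', 'min.css'])

        sys.exit(0)  # exit status 0 lets the commit proceed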

    Read the article

  • How would you start automating my job? - Part 2

    - by Jurily
    (Followup to this question) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow:
    1. Monkey collects email attachments (4 Excel spreadsheets, 1 PDF)
    2. Monkey creates central database, does complex calculations (right now this is also an Excel spreadsheet)
    3. Monkey sends data to two bosses, who set the retail prices independently; first one to reply wins
    4. Monkey sends order form to our other warehouses, also Excel
    5. Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories)
    6. Jurily enters the data into the accounting system. I've given up on automating this part, there's too much business logic involved, and the database is a pile of sh^W legacy
    My question: What technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: How would you redesign the whole system from the ground up? P.S. in case you were wondering, my job title is System Administrator.

    Read the article

  • How can I see which shared folders my program has access to?

    - by Kasper Hansen
    My program needs to read and write to folders on other machines that might be in another domain, so I used System.Runtime.InteropServices to add shared folders. This worked fine when it was hard coded in the main menu of my Windows service, but since then something went wrong and I don't know if it is a coding error or a configuration error. What is the scope of a shared folder? If a thread in my program adds a shared folder, can the entire local machine see it? Is there a way to view which shared folders have been added? Or is there a way to see when a folder is added?

        [DllImport("NetApi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        internal static extern System.UInt32 NetUseAdd(string UncServerName, int Level, ref USE_INFO_2 Buf, out uint ParmError);

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        internal struct USE_INFO_2
        {
            internal LPWSTR ui2_local;
            internal LPWSTR ui2_remote;
            internal LPWSTR ui2_password;
            internal DWORD ui2_status;
            internal DWORD ui2_asg_type;
            internal DWORD ui2_refcount;
            internal DWORD ui2_usecount;
            internal LPWSTR ui2_username;
            internal LPWSTR ui2_domainname;
        }

        private void AddSharedFolder(string name, string domain, string username, string password)
        {
            if (name == null || domain == null || username == null || password == null)
                return;

            USE_INFO_2 useInfo = new USE_INFO_2();
            useInfo.ui2_remote = name;
            useInfo.ui2_password = password;
            useInfo.ui2_asg_type = 0;   // disk drive
            useInfo.ui2_usecount = 1;
            useInfo.ui2_username = username;
            useInfo.ui2_domainname = domain;

            uint paramErrorIndex;
            uint returnCode = NetUseAdd(String.Empty, 2, ref useInfo, out paramErrorIndex);
            if (returnCode != 0)
            {
                throw new Win32Exception((int)returnCode);
            }
        }

    Read the article

  • Adding WCF service reference adds DataContract types too

    - by Avi Shilon
    Hi everybody, I've used Visual Studio's Add Service Reference feature to add a service (actually it is a workflow service, created in WF4 RC1, but I don't think this makes any difference), and it also added the DataContracts that the service uses. At first this seemed fine, because all I had in the DataContracts were simple properties, with no implementation. But now I've added code in the constructor of one of the data contracts that creates an instance of one of the properties (the one that exposes a list of other DCs), and when I updated the service reference via VS (2010 RC1), that implementation was not carried over. What should I do? Should I use my own DCs instead of the ones created by VS, or should I use the ones VS created? I've noticed that the properties in the VS-generated DCs contain some additional logic for checking equality in the setters, and they also implement some interfaces (like IExtensibleDataObject and INotifyPropertyChanged) which might come in handy in the future, I guess (I'm not very knowledgeable about WCF). Thank you for your time, folks. Avi

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by available memory on a single node, or cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far the closest such system is Cassandra, which does the above. However, it adds support for other features that are not essential for me. So while I like it (and will consider using it, of course), I am trying to figure out if there might be other mature projects that implement what I need. Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, access values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (I can do client-side data binding, flexible value/content types, etc). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case. To help filter out answers, I have investigated some alternatives within the general domain, like Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented), and I am aware of systems that are not quite distributed while otherwise qualifying (bdb variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), Redis (in-memory store only)).

    Read the article

  • Suggestions on implementing an iPad magazine app

    - by alku83
    I've been tasked with creating a magazine style app for iPad. Ideally it would look a little something like the Zinio app: http://www.zinio.com/ipad/ . This app is effectively a shell, allowing you to sample magazines (eg. read the first few pages) and select them for download. The magazines appear to have some sort of overlay, allowing you to interact with some things (eg. tap to watch a video). A few questions that come to mind: How would I go about delivering content to the user? In-app purchases aren't really an option, as some content will need to be delivered for free. Is it possible to download a package and make this available within the application? What format would be suitable for displaying the magazine? Sequential images, PDF, ebook? I'll need to have some form of interactivity. I guess I could have some form of lookup table, which would include information such as if the user taps on this page, within these coordinates, then launch this item. Anyone dealt with any similar issues?

    Read the article

  • Generate and merge data with python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same, and I want to remove them. The sequential version is essentially:

        def create_new_data_for(datum):
            """Make a list of new data from some old datum."""
            return [datum.modified_copy(k) for k in datum.k_list]

        data = [...]  # some list of data to start with

        # generate a list of new data from the old data, we'll reduce it next
        newdata = []
        for d in data:
            newdata.extend(create_new_data_for(d))

        # now reduce the data under ".matches(other)"
        reduced = []
        for d in newdata:
            for seen in reduced:
                if d.matches(seen):
                    break
            else:
                # we haven't seen anything like d yet
                reduced.append(d)

        # now reduced is finished and is what we want!

    I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation: each process would put the items it creates onto the queue, and when the processes are reducing the data, they could just get the items from the queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different way to accomplish this goal better?
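
    For illustration, a minimal sketch of one safe arrangement: parallelize only the generation step with multiprocessing.Pool and keep the reduction in the parent process, so no shared list is mutated concurrently. It assumes the data elements and create_new_data_for (the function from the question above) are picklable and importable, which multiprocessing requires:

        # Sketch: parallel generation, sequential reduction (no shared mutable state).
        from multiprocessing import Pool

        def parallel_generate_and_reduce(data, workers=4):
            pool = Pool(processes=workers)
            try:
                # Each worker expands one datum into its list of new data
                # (create_new_data_for is the function defined in the question).
                chunks = pool.map(create_new_data_for, data)
            finally:
                pool.close()
                pool.join()

            # Flatten the per-datum lists in the parent process.
            newdata = [d for chunk in chunks for d in chunk]

            # Reduce under .matches() in a single process, so there are no
            # race conditions on the growing 'reduced' list.
            reduced = []
            for d in newdata:
                if not any(d.matches(seen) for seen in reduced):
                    reduced.append(d)
            return reduced

    The trade-off is that the reduction stays single-process; if the matches() comparisons dominate the run time, this sketch won't help much.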

    Read the article

  • How well do (D)VCS cooperate with workflows involving several people editing files in the same directory?

    - by frankster
    Imagine because of tradition that your team's preferred development method involved several people with a shared login, editing files on a build server using vim. [Note that there are well known issues to do with only one person being able to edit a file at once, people going away from their desk and leaving the file locked in vim, system builds/restarts requiring everybody to stop debugging while this occurs. This is not what the question is about] If source control was to be introduced without changing the workflow, would there be much benefit? I am guessing that the commit history won't be much use as it will contain all changes by everybody in big lumps. So it wouldn't really be possible to rewind individual changes apart from at a really big level.

    Read the article
