Search Results

Search found 5018 results on 201 pages for 'sharepoint workflow'.

Page 184/201 | < Previous Page | 180 181 182 183 184 185 186 187 188 189 190 191  | Next Page >

  • MacBook Pro - 15" with i7 processor - Any problems with heat?

    - by webworm
    You may have already heard about the review done by the folks at PC Authority in Australia, where they had an i7 MacBook Pro that got up to 100 degrees Celsius during benchmarking. Here is the URL in case you have not read it. http://www.pcauthority.com.au/News/172791,macbook-pro-helps-core-i7-hit-100-degrees.aspx In any case, I was considering purchasing a 15" MacBook Pro with the i7 processor and the NVIDIA GeForce GT330M with 512 MB of video memory. Having read how hot the computer got, I started to become hesitant about purchasing. My main concern is long-term damage to the computer due to excessive heat. I plan to use the MacBook Pro as a development machine on which I will be running Windows 7 within VMware Fusion or VirtualBox. Within the VM I will be running IIS, SQL Server, Visual Studio and SharePoint Server, hence my wish for the power of the i7 processor. That is why I wanted to check with actual owners of the MacBooks with the i7 processor and see what their experiences have been. Have you noticed excessive heat? How does your MacBook handle processor-intensive apps over long periods of time? Thank you!

    Read the article

  • Share Exchange Calendar Outside Organization

    - by CalCurious
    I'm trying to figure out the best way to meet a user's (Corp-A-User) request to share their calendar with someone at another company (Corp-B-User). We're running Microsoft SBS 2008 with Exchange 2007 and SharePoint. The remote user is running Exchange, version unknown. Corp-A-User wants to give Corp-B-User the ability to create appointments on Corp-A-User's calendar. This will naturally require sharing of Free/Busy information. Corp-A-User naturally lacks the vision to see ANY problem with giving Corp-B-User full access to their calendar. But I see the problems with that, and would prefer that Corp-B-User have only the ability to see Free/Busy and create appointments. Most of the external publishing options that I have thought of, such as WebDAV, allow displaying a user's calendar, but there are problems with security and with the ability to create appointments. Right now, I'm thinking the cleanest solution would be to use a Google calendar along with Google Calendar Sync for the two users' Outlook clients. But I'm not sure there isn't a better way, and I hate the idea of pushing a corporate calendar up to Google - not to mention the issues likely to pop up from the multiple sync paths. Does anyone have a good solution for this scenario that they would be willing to share?

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by indyvoyage
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communications Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc. As Microsoft talked up each product's enhancements over the previous version, I felt that clients were not much interested in all these details. Take Office Communicator: they have surely improved the product a lot, and at first sight everyone said "WOW, great product", but nobody wishes to pay money for all these extra features. Some argued that they are bogged down by the increased number of menus, and that they don't need a soft-call feature integrated with mobile calls. The same applies to all the other products, such as MS Office (what next - 2 ribbons?), the Windows OS and many more. There must indeed be good features in all these products, but is it worth spending the money and time to update an older system? Sometimes these features will decrease productivity instead of increasing it. So do you think that whatever enhancement MS makes to its products is only for selling purposes, not for real use - and also to keep developers busy learning the new tools and features? I am sure some people here will argue that some people need this sort of feature, but I am not talking about NASA or MI5 guys; I am talking about usual businesses and Joe Public. Any ideas welcome.

    Read the article

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will be running MS SQL databases (application, SharePoint, BI) and the other half will be file services and IIS. To extend the storage capacity, we'll be adding an MD1220 enclosure with another 24x900GB disks to the MD3220. Both DAS will have 2 controllers. Our current measured load is 1000 IOPS average and 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS with RAID 10 only and the other with RAID 5. That will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS to get the best possible IOPS/MBps, given that each DAS will have to have 2 hot spares. For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus the 2 hot spares) with both controllers assigned to that one disk group, or is it better to have 2 disk groups of 11 disks each, each assigned to one of the two controllers? Same question for the RAID 10 setup, where the plan is: 2 disks for logs (RAID 1), 2 hot spares and 20 disks for RAID 10. Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other. Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group. I would assume that there is no right or wrong, but that it all depends very much on the specific application behaviour, so I am looking for some general ideas on the pros and cons of the different options (a rough capacity model is sketched below). If there are other meaningful options, feel free to propose them.
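
    For framing, a back-of-envelope model can bound the numbers before any real benchmarking. The sketch below is a minimal Python estimate, assuming roughly 140 IOPS per 10k spindle and a 70/30 read/write mix (both figures are assumptions, not measurements), and applying the standard write penalties of 4 for RAID 5 and 2 for RAID 10:

        # Naive host-visible IOPS for a disk group under a given read/write mix.
        PER_DISK_IOPS = 140                        # assumed figure for a 10k SAS spindle
        WRITE_PENALTY = {"raid5": 4, "raid10": 2}  # standard RAID write penalties

        def effective_iops(disks, level, read_fraction=0.7):
            raw = disks * PER_DISK_IOPS
            penalty = WRITE_PENALTY[level]
            # reads cost one back-end IO each; writes cost `penalty` back-end IOs
            return raw / (read_fraction + (1 - read_fraction) * penalty)

        print(effective_iops(22, "raid5"))      # one 22-disk group: ~1620
        print(2 * effective_iops(11, "raid5"))  # two 11-disk groups: same aggregate
        print(effective_iops(20, "raid10"))     # one 20-disk group: ~2150
        print(5 * effective_iops(4, "raid10"))  # five 4-disk groups: same aggregate

    In this naive model the aggregate ceiling is the same however the spindles are grouped; the practical differences come from controller ownership, stripe width and rebuild impact, which is why the 7000 IOPS peaks matter more than the 1000 IOPS average.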

    Read the article

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS, and it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the file and close the files... all in code. In previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions that "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is... is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel... just the ability to run Excel on a server. Can that now be done? Thanks, Glen
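
    For anyone unfamiliar with the pattern in question, here is a minimal sketch of unattended Office Automation using COM interop from Python via pywin32 (the file paths are hypothetical; the same object model is used from .NET):

        # Unattended Excel automation via COM - the pattern Microsoft has
        # historically warned against running on a server.
        import win32com.client

        excel = win32com.client.Dispatch("Excel.Application")
        excel.Visible = False
        excel.DisplayAlerts = False  # suppress dialogs - unexpected pop-ups are the classic server risk

        try:
            wb = excel.Workbooks.Open(r"C:\jobs\input.xlsx")
            excel.Calculate()                             # run workbook calculations
            result = wb.Worksheets(1).Range("A1").Value   # read data back
            wb.SaveCopyAs(r"C:\jobs\copy-of-input.xlsx")  # save a copy of the file
            wb.Close(SaveChanges=False)
        finally:
            excel.Quit()  # an orphaned EXCEL.EXE is the other classic server failure mode

    The dialog-box problem this code tries to paper over is exactly what the quoted HPC "pop-up manager" is meant to handle.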

    Read the article

  • Kindly guide me to buy a new laptop [on hold]

    - by Its me 007
    I am from India. I want to buy a new laptop. I have shortlisted a few, but am confused about which processor, chipset and graphics card will best suit my requirements. NOTE: I am not able to post clickable links, so you will have to copy and paste them. Sorry.
    1) HP Pavilion 15-N004TX - 4th Gen Ci5-4200U/4GB RAM/500GB HDD/1GB Radeon graphics - Rs 39990
    www.homeshop18.com/hp-pavilion-15-n004tx-laptop-4th-gen-intel-core-i5-4200u-4gb-500gb-15-6-linux-silver-black/computers-tablets/laptops/product:30989197/cid:16317/
    2) Lenovo Essential G510 (59-398452) - 4th Gen Ci5-4200M/4GB/500GB/Win8/2GB ATI Sunpro 8570 graphics - Rs 44969
    www.flipkart.com/lenovo-essential-g510-59-398452-laptop-4th-gen-ci5-4gb-500gb-win8-2gb-graph/p/itmdp26eprwf5k5v?gclid=CMnh99GA2LoCFaRU4godNiUAGQ&semcmpid=sem_7847244212_laptopsnew_goog&tgi=sem%2C1%2CG%2C7847244212%2Cg%2Csearch%2C%2C24387103114%2C1t1%2Cb%2C%2Blenovo+%2Bg510%2F59+%2B398452%2Cc%2C%2C%2C%2C%2C%2C%2C2
    3) HP Pavilion G6-2303TX Laptop (3rd Gen Ci5-3230M/4GB/500GB/DOS/1GB graphics) - Rs 40500
    www.flipkart.com/hp-pavilion-g6-2303tx-laptop-3rd-gen-ci5-4gb-500gb-dos-1gb-graph/p/itmdm6yzh4gr4cxd?pid=COMDM6YHWMGDRDEZ&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81
    4) HP Pavilion 15-E039TX Laptop (3rd Gen Ci5-3230M/4GB/1TB/Win8/2GB graphics) - Rs 46690
    www.flipkart.com/hp-pavilion-15-e039tx-laptop-3rd-gen-ci5-4gb-1tb-win8-2gb-graph/p/itmdn4d9wykhdcpz?pid=COMDN4CZGFMGJNTN&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81
    Now I am confused about: which processor and chipset are best? How much graphics memory is enough? (I am not a gamer.) Are any of these laptops future-proof, i.e. will they at least support upcoming programming software, which demands more processor and memory? The laptop will mainly be used for multitasking. It should at least be capable of running the following: Visual Studio 2012 and its upcoming versions for at least 4 years, SQL Server 2008 R2 and above, SharePoint, Blend and Photoshop. Kindly suggest. If anyone knows another laptop with a good configuration within a 50k budget, kindly suggest it. Thanks in advance.

    Read the article

  • Replacing HD in an MacOS 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have 2 Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No. 1 is an Open Directory Master with 50+ user accounts; No. 2 has only 2 local accounts (/Local/Default) and looks to the OD Master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have 3 internal HDs: the boot volume with only the Server OS and minimal apps, a 'DataShare' HD (500GB) and a backup drive (500GB). After upgrading the DataShare HD in Server No. 2 from the smaller internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on Server No. 2. Users get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (it keeps all ownership data, UIDs, permissions, and last edit dates and times). Everything booted up OK, with no indication of any issues (paths to the share point look good). Notes during troubleshooting: Server 1 is operating perfectly; all users can access shares and authenticate etc. I've checked that the SACL (Service Access Control List) settings are OK. On Server 2, in the Server Admin app, I can see all the shares listed OK; the paths seem valid, and I can disable/re-enable the shares with no errors. On Server 2, Workgroup Manager lists all the accounts from the OD Master in the LDAP directory view; all seems fine from there. Basically, everything looks normal, but no file shares on Server 2 can be accessed by regular users.

    Read the article

  • Office documents on intranet all requiring second login and can't pass auth? Disable webdav?

    - by DOTang
    I am not sure what is going on, but recently every Office document on our intranet prompts for a second login, and according to the logs it looks like the client is trying to use WebDAV to open an (editable?) version of the document directly from the server. We have no SharePoint server set up or anything, so this shouldn't be happening. All I want is for the document to be saved, or opened from a local copy in temp, like normal. Here is the log:
    Line 57499: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
    Line 57500: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
    Line 57501: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
    The log basically contains a bunch of these. How can I disable this behavior so that downloaded Office documents aren't opened through WebDAV? Edit: I should clarify the behavior. The browser asks if you want to save or open the document; upon choosing Open, it asks you to re-authenticate, you put in the user information, and the login box comes up 3 times, acting as if you entered the wrong password. For some users, after passing the login box the third time the document still opens; for others, their browser just locks up. It also doesn't look like WebDAV is even installed on our server - I see no config options for it in IIS, as outlined on this page: http://learn.iis.net/page.aspx/350/installing-and-configuring-webdav-on-iis-7/#001

    Read the article

  • Consulting: Organizing site/environment documentation for customers?

    - by ewwhite
    Over time, I've taken on consulting and contract engineering work for various clients. More recently, customers are asking for certain types of documentation. These are small businesses and typically do not have dedicated technical staff. Within a single company, Wiki/Confluence/SharePoint, etc. all make sense as a central repository for documentation and environment information. I struggle with finding a consistent method to deliver the following information to discrete customers. I'm shooting for a process that's more portable, secure and elegant than a simple spreadsheet or the dreaded binder full of outdated information.
    - Important IP addresses, DHCP scope, etc.
    - Network diagram (if needed).
    - Administrative usernames and passwords and management URLs.
    - Software license keys.
    - Support contracts and warranty information.
    - Vendor support contacts and instructions.
    I know there are other consultants here. Any suggestions or tips on maintaining documentation across multiple environments in a customer-friendly format? How do you do it?

    Read the article

  • Dynamics CRM error "A currency is required if a value exists in a money field" after converting Acti

    - by Evgeny
    We have a Dynamics CRM 4.0 instance with some custom attributes of type "money" on the Case entity and on all Activity entities (Email, Phone Call, etc.). When I use the built-in "Convert Activity to Case" functionality, I find that the resulting Case does not have a Currency set, even if the Activity it was created from does have one. Whenever the case is opened, the user then gets this JavaScript error: "A currency is required if a value exists in a money field. Select a currency and try again." This is extremely annoying! How do I fix it? Is there any way I can set the currency? It needs to be done synchronously, because the Case is opened immediately after it's created from an Activity, so even if I started a workflow to set the currency, the user would still get that error at least once. Alternatively, can I just suppress the warning somehow? I don't really care about setting the Currency; I just want the error gone. Thanks in advance for any help!

    Read the article

  • Error message: "Two different contracts have the same ConfigurationName" when downloading wsdl from

    - by rwwilden
    I get the following error message when I try to use svcutil to generate a client proxy for a .xamlx file that is hosted by AppFabric beta 2: "Two different contracts have the same ConfigurationName". I understand the message; however, I cannot find its cause or how to fix it. I'm following the 'Introduction to Workflow Services' lab from the VS2010 RC training kit. The web application has two services: SubmitApplication.xamlx and EducationScreening.xamlx. I'm not sure why, but both of them have four endpoints. If I take a look via the AppFabric Dashboard in IIS Mgmt Studio:
    - basicHttpBinding (Contract: *) (Type: Application (Default))
    - netNamedPipeBinding (Contract: System.ServiceModel.Activities.IWorkflowInstanceManagement) (Type: System (workflowControlEndpoint))
    - netNamedPipeBinding (Contract: *) (Type: Application (Default))
    - serviceMetadataHttpGetBinding (Contract: serviceMetadataHttpGetContract) (Type: System (serviceMetadataEndpoint))
    When taking a look at SubmitApplication.xamlx in a browser, I see the following stack trace:
    [InvalidOperationException: Two different contracts have the same ConfigurationName.]
      System.ServiceModel.Activities.WorkflowServiceHost.CreateDescription(IDictionary`2& implementedContracts) +361
      System.ServiceModel.ServiceHostBase.InitializeDescription(UriSchemeKeyedCollection baseAddresses) +174
      System.ServiceModel.Activities.WorkflowServiceHost.InitializeDescription(WorkflowService serviceDefinition, UriSchemeKeyedCollection baseAddresses) +82
      System.ServiceModel.Activities.WorkflowServiceHost.InitializeFromConstructor(WorkflowService serviceDefinition, Uri[] baseAddresses) +206
      System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateWorkflowServiceHost(WorkflowService service, Uri[] baseAddresses) +43
      System.ServiceModel.Activities.Activation.WorkflowServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) +974
      System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +1423
      System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +50
      System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +1132
    [ServiceActivationException: The service '/HRApplicationServices/SubmitApplication.xamlx' cannot be activated due to an exception during compilation. The exception message is: Two different contracts have the same ConfigurationName..]
      System.Runtime.AsyncResult.End(IAsyncResult result) +889824
      System.ServiceModel.Activation.HostedHttpRequestAsyncResult.End(IAsyncResult result) +179150
      System.Web.AsyncEventExecutionStep.OnAsyncEventCompletion(IAsyncResult ar) +107
    Can anyone tell me what I'm doing wrong? I haven't configured any of the bindings myself. The BasicHttpBinding is what you get by default in .NET 4 when hosting a service inside a web application; the other bindings are configured by AppFabric, and I can't find their configuration anywhere. Kind regards, Ronald Wildenberg

    Read the article

  • Instructions on using TortoiseGit to interact with an SVN repository?

    - by markerikson
    I've been using TortoiseSVN on Windows for years with local filesystem repositories for my own projects. I'm planning to start collaborating with a friend on one of the projects, and will be shifting the repository to my own website. I've read a lot of "git beats SVN!" posts over the last couple of years, and figured I ought to at least see what the fuss was about. Some research turned up the "git svn" command, and the fact that TortoiseGit claims to have some level of git-svn support. I like the idea of keeping the SVN repository and doing some local commits or branches with git before committing them to the repository. The "shelve" command also sounds useful. Unfortunately, while there are a number of CLI git-svn tutorials, there's nothing for TortoiseGit (which admittedly seems to still be in early development). As a result, I'm having problems trying to figure out what workflow I need to get these pieces to cooperate. I have an SVN repository in D:\Projects\repositories\MyProject. I created D:\Projects\temp\gittest and tried to do a TortoiseGit "Git Clone" of the repository. From there, I've had issues trying to indicate the location of the trunk/branches/tags folders (which are just the standard layout in my repository); I was only able to get useful results when I left those unchecked. When I did seem to get the git repository started correctly, I was able to make some changes and do a couple of git commits, but then had problems doing an SVN DCommit. So, I'm hoping someone out there can provide a reasonably detailed set of instructions on how to correctly use TortoiseGit with an existing SVN repository (with the repository on either the local filesystem or a remote server). No "don't use SVN!" responses, please - I'm interested in learning how to get these two pieces to work together. If you feel TortoiseGit's SVN support isn't mature enough to make this work, that would also be useful information. Thanks!
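
    For reference, a minimal sketch of the underlying git-svn cycle that TortoiseGit wraps, written here as Python subprocess calls for precision (normally you would just type the git commands in a shell; the repository URL is taken from the layout above):

        import subprocess

        def git(*args, cwd=None):
            subprocess.run(["git", *args], cwd=cwd, check=True)

        # -s assumes the standard trunk/branches/tags layout
        git("svn", "clone", "-s", "file:///D:/Projects/repositories/MyProject", "MyProject")
        # ... make local commits with plain `git commit` inside MyProject ...
        git("svn", "rebase", cwd="MyProject")   # fetch new SVN revisions, replay local work on top
        git("svn", "dcommit", cwd="MyProject")  # push local commits back to SVN, one per revision

    If TortoiseGit's dialogs misbehave, verifying that this cycle works from the command line first at least isolates whether the problem is in the repository conversion or in the GUI.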

    Read the article

  • What is the difference between System.Speech.Recognition and Microsoft.Speech.Recognition?

    - by Michael
    There are two similar namespaces and assemblies for speech recognition in .NET. I’m trying to understand the differences and when it is appropriate to use one or the other. There is System.Speech.Recognition from the assembly System.Speech (in System.Speech.dll); System.Speech.dll is a core DLL in the .NET Framework class library 3.0 and later. There is also Microsoft.Speech.Recognition from the assembly Microsoft.Speech (in microsoft.speech.dll); Microsoft.Speech.dll is part of the UCMA 2.0 SDK. I find the docs confusing and I have the following questions:
    1. System.Speech.Recognition says it is for "The Windows Desktop Speech Technology"; does this mean it cannot be used on a server OS or cannot be used for high-scale applications?
    2. The UCMA 2.0 Speech SDK ( http://msdn.microsoft.com/en-us/library/dd266409%28v=office.13%29.aspx ) says that it requires Microsoft Office Communications Server 2007 R2 as a prerequisite. However, I’ve been told at conferences and meetings that if I do not require OCS features like presence and workflow, I can use the UCMA 2.0 Speech API without OCS. Is this true?
    3. If I’m building a simple recognition app for a server application (say I wanted to automatically transcribe voice mails) and I don’t need the features of OCS, what are the differences between the two APIs?

    Read the article

  • GIT repository layout for server with multiple projects

    - by Paul Alexander
    One of the things I like about the way I have Subversion set up is that I can have a single main repository with multiple projects. When I want to work on a project I can check out just that project. Like this:
    \main
      \ProductA
      \ProductB
      \Shared
    then svn checkout http://.../main/ProductA
    As a new git user, I want to explore a bit of best practice in the field before committing to a specific workflow. From what I've read so far, git stores everything in a single .git folder at the root of the project tree. So I could do one of two things:
    1. Set up a separate project for each product.
    2. Set up a single massive project and store products in subfolders.
    There are dependencies between the products, so the single massive project seems appropriate. We'll be using a server where all the developers can share their code. I've already got this working over SSH and HTTP, and that part I love. However, the repositories in SVN are already many GB in size, so dragging around the entire repository on each machine seems like a bad idea - especially since we're billed for excessive network bandwidth. I'd imagine that the Linux kernel project repositories are equally large, so there must be a proper way of handling this with git, but I just haven't figured it out yet. Are there any guidelines or best practices for working with very large multi-project repositories?

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ , and see the comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/ . However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regard to version control, there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst:
    - Backup: I have a backup system already in place.
    - Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.)
    - Collaboration: Most of the time I am analysing data myself, thus I wouldn't get the collaboration benefits of version control.
    There are also several potential costs involved with adopting version control:
    - Time to evaluate and learn a version control system
    - A possible increase in complexity over my current file management system
    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above:
    1. Is version control worth the effort?
    2. What are the main pros and cons of adopting version control?
    3. What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

    Read the article

  • Database Change Management - Setup for Initial Create Scripts, Subsequent Migration Scripts

    - by Martin Aatmaa
    I've got a database change management workflow in place. It's based on SQL scripts (so it's not a managed code-based solution). The basic setup looks like this:
    Initial/
      Generate Initial Schema.sql
      Generate Initial Required Data.sql
      Generate Initial Test Data.sql
    Migration/
      0001_MigrationScriptForChangeOne.sql
      0002_MigrationScriptForChangeTwo.sql
      ...
    The process to spin up a database is to run all the Initial scripts, and then run the sequential Migration scripts. A tool takes care of the versioning requirements, etc. My question is: in this kind of setup, is it useful to also maintain this?
    Current/
      Stored Procedures/
        dbo.MyStoredProcedureCreateScript.sql
        ...
      Tables/
        dbo.MyTableCreateScript.sql
        ...
      ...
    By "this" I mean a directory of scripts (separated by object type) that represents the create scripts for spinning up the current/latest version of the database. For some reason, I really like the idea, but I can't concretely justify the need for it. Am I missing something? The advantages would be:
    - For dev and source control, we would have the same object-per-file setup that we're used to
    - For deployment, we can spin up a new DB instance at the latest version either by running Initial + Migration, or by running the scripts from Current/
    - For dev, we do not need a DB instance running in order to do development; we can do "offline" development on the Current/ folder
    The disadvantages would be:
    - For each change, we need to update the scripts in the Current/ folder, as well as create a Migration script (in the Migration/ folder)
    Thanks in advance for any input!
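
    As an aside, the spin-up order described above is simple enough to capture in a few lines. A minimal Python sketch, where apply_script() is a placeholder for whatever actually executes the SQL against the target database:

        from pathlib import Path

        def apply_script(script: Path) -> None:
            print(f"applying {script}")  # placeholder: execute the file's SQL here

        for script in sorted(Path("Initial").glob("*.sql")):
            apply_script(script)
        # the 0001_, 0002_, ... prefixes sort lexically, which gives the right order
        for script in sorted(Path("Migration").glob("*.sql")):
            apply_script(script)

    The zero-padded numeric prefixes are doing the real work here: they make the migration order explicit in the filenames themselves.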

    Read the article

  • Best practice: git, github, lighthouse and 2 developers

    - by Alxandr
    I'm setting up a new project and plan on using git and GitHub for source control and repo hosting, and Lighthouse for bug tracking. I've been working with git for some while now, but have only been using it as more of a backup solution than a collaborative coding solution. I've noticed that in GitHub you can set up a service hook to Lighthouse so that whenever you push to GitHub it notifies Lighthouse of the changes. This uses a token for user authentication and has the ability to change tickets to resolved, etc. However, I believe the token works in such a way that whenever any user pushes to the repo (doesn't matter who), it's the owner of the repo that "updates" Lighthouse. This is a problem. So I believe it is necessary to have 2 separate repos at GitHub (one for each dev), and I'm wondering about the workflow that should be used. Anyone care to shed some light on this matter? Like when to pull and push (and where), and how to keep the two GitHub repos in sync, or something like that? Or another solution to the problem altogether?

    Read the article

  • Storing a digital signature for bookings on a web based system

    - by Duncan
    I have a web-based bookings system built for a UK higher education client to allow students to sign out equipment (laptops, cameras etc). It's been in use successfully for a couple of years. In the current workflow, equipment is collected and the booking is printed, signed by the student and kept until the equipment is returned. Students are emailed a PDF copy of the booking and reminders if equipment is outstanding. Students can log in and pre-book equipment using their university LDAP credentials, and the booking is then authorised by staff for later collection; students can also walk in and have equipment booked out by staff. The client would like to remove the signed paper part of the process and replace it with some sort of digital signature. The suggestion was a graphics tablet, but with a web-based system this would require a local software package and would, in my view, be impractical. My thought is that students would enter their LDAP username and password upon collection of the equipment, verifying their identity and effectively digitally signing the booking. My question is: what would be best to store as a signature, or would simply authenticating the user and setting a boolean flag to indicate that this has been done be deemed sufficient?
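
    One middle ground between a bare boolean flag and a full digital signature is to store a keyed hash over the booking details at the moment the student authenticates, so the "signed" record cannot later be altered without detection. A minimal Python sketch, where the server key and field names are illustrative assumptions:

        import datetime
        import hashlib
        import hmac
        import json

        SERVER_KEY = b"secret-key-kept-out-of-source-control"  # assumption: managed separately

        def sign_booking(booking_id: int, username: str, equipment: list) -> dict:
            # canonical JSON payload of what the student is agreeing to
            payload = json.dumps(
                {"booking": booking_id, "user": username, "equipment": equipment,
                 "signed_at": datetime.datetime.utcnow().isoformat()},
                sort_keys=True,
            )
            digest = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
            return {"payload": payload, "hmac": digest}  # store both with the booking

        def verify_booking(record: dict) -> bool:
            expected = hmac.new(SERVER_KEY, record["payload"].encode(),
                                hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, record["hmac"])

    This is still only evidence that the server recorded an authenticated action, not a legally robust signature, but it is strictly stronger than a boolean column.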

    Read the article

  • Dynamic State Machine in Ruby? Do State Machines Have to be Classes?

    - by viatropos
    The question is: are state machines always defined statically (on classes)? Or is there a way for me to have each instance of the class carry its own set of states? I'm checking out Stonepath for implementing a task engine. I don't really see the distinction between "states" and "tasks" in there, so I'm thinking I could just map a Task directly to a state. This would allow me to define task lists (or workflows) dynamically, without having to do things like:
    aasm_event :evaluate do
      transitions :to => :in_evaluation, :from => :pending
    end

    aasm_event :accept do
      transitions :to => :accepted, :from => :pending
    end

    aasm_event :reject do
      transitions :to => :rejected, :from => :pending
    end
    Instead, a WorkItem (the main workflow/task manager model) would just have many tasks. Then the tasks would work like states, so I could do something like this:
    aasm_initial_state :initial

    tasks.each do |task|
      aasm_state "#{task.name}_phase".to_sym
    end

    previous_state = :initial
    tasks.each do |task|
      aasm_event task.name.to_sym do
        transitions :to => "#{task.name}_phase".to_sym, :from => previous_state
      end
      previous_state = "#{task.name}_phase".to_sym
    end
    However, I can't do that with the aasm gem because those methods (aasm_state and aasm_event) are class methods, so every instance of the class with that state machine has the same states. I want a "WorkItem" or "TaskList" to dynamically create a sequence of states and transitions based on the tasks it has. This would allow me to dynamically define workflows and just have states map to tasks. Are state machines ever used like this?
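
    Per-instance state machines are certainly possible when the transition table is plain data rather than class-level macros; a minimal Python sketch of the idea (the names are illustrative, and this sidesteps aasm entirely):

        class TaskWorkflow:
            def __init__(self, tasks):
                # build this instance's own states and transitions from its task list
                self.transitions = {}
                prev = "initial"
                for name in tasks:
                    self.transitions[(prev, name)] = f"{name}_phase"
                    prev = f"{name}_phase"
                self.state = "initial"

            def fire(self, event):
                key = (self.state, event)
                if key not in self.transitions:
                    raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
                self.state = self.transitions[key]

        wf = TaskWorkflow(["evaluate", "accept"])
        wf.fire("evaluate")   # state is now "evaluate_phase"

    Whether aasm itself supports this is a separate question, but nothing about the state machine pattern requires static definition.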

    Read the article

  • FxCop hates my usage of MVVM

    - by Dave
    I've just started to work with FxCop to see how poorly my code does against its full set of rules. I'm starting off with the "Breaking" rules, and the first one I came across was CA2227, which basically says that collection properties should be read-only (no setter), so that the collection object can't accidentally be replaced. Since I'm using MVVM, I've found it very convenient to use an ObservableCollection with get/set properties, because it makes my GUI updates easy and concise in the code-behind. However, I can also see what FxCop is complaining about. Another situation that I just ran into is with WF, where I need to set the parameters when creating the workflow, and I'd hate to have to write a wrapper class around the collection I'm using just to avoid this particular error message. For example, here's a sample runtime error message that I get when I make the property read-only:
    The activity 'MyWorkflow' has no public writable property named 'MyCollectionOfStuff'
    What are your opinions on this? I could ignore this particular error, but that's probably not good, because I could conceivably violate this rule elsewhere in the code where MVVM doesn't apply (model-only code, for example). I think I could also change it from a property to a class with methods to manipulate the underlying collection, and then raise the necessary notification from the setter method. I'm a little confused... can anyone shed some light on this?

    Read the article

  • Publishing toolchain

    - by Marcelo de Moraes Serpa
    Hello all, I have a book project which I'd like to start sooner rather than later. This would follow an agile-like publishing workflow, i.e. publish early and often. It is meant to be self-published by me, and I'm not really looking to paper-publish it, even though we never know. If I weren't a geek, I'd probably have already started writing in Word or any other WYSIWYG tool and just exported to PDF. However, we know that is not the best solution, and emacs rules my text-editing life, so the output format should be as simple as possible and text-based. I've thought about the following options:
    1) Use org-mode and export to PDF;
    2) Use markdown mode and export to PDF;
    3) Use something similar to what the guys at Pragmatic Programmers do: an XML + XSLT + LaTeX toolchain. More complex, but much more control over the style.
    Any other ideas or references? I want to start writing as soon as possible. In fact, I already have a draft in an org-formatted file. However, I do want to have and use the full power of LaTeX later on to format it the way I want and make it look fabulous :) Thanks in advance, Marcelo.

    Read the article

  • How do I create a C# .NET Web Service that Posts items to a users Facebook Wall?

    - by Jourdan
    I'm currently toying around with the Clarity .NET Facebook API but am finding certain situations with authentication to be kind of limiting. I keep going through the tutorials but always end up hitting a brick wall with what I want to do. Perhaps it just cannot be done? I want to make a web service that takes in the required credentials (API key, secret key, user ID (or session key?) and whatever else I would need) and then performs various tasks: posting to the user's wall, adding events, etc. The problem I am having is this: the current documentation, examples and support provide a way to do this within the context of a web site. Within that context, the required "connect" popup can be initiated and allows the user to authenticate and connect the application. From that point on, the web site can go about its business and do what it needs to do. If I close the browser and come back to the page, I have to push the connect button again - except this time, since I was already logged into Facebook, I don't have to go through the whole connection process. But still... how do applications like TweetDeck get around this? They seemingly have you connect once, when you install their application, and you don't have to do it again. I would assume that this same idea would have to be applied to making a web service, because you don't know what context the caller will be in when making the web service call: the web service methods being called could be coming from a Windows Forms app, or from code-behind in a workflow.

    Read the article

  • Cocoa memory management

    - by silvio
    At various points during my application's workflow, I need to show a view. That view is quite memory-intensive, so I want it to be deallocated when it gets discarded by the user. So, I wrote the following code:
    - (MyView *)myView {
        if (myView != nil)
            return myView;
        myView = [[MyView alloc] initWithFrame:CGRectZero]; // allocate memory if necessary
        // further init here
        return myView;
    }

    - (void)discardView {
        [myView discard]; // the discard method puts the view offscreen
        [myView release]; // free memory!
    }

    - (void)showView {
        view = [self myView];
        // more code that puts the view onscreen.
    }
    Unfortunately, this method only works the first time. Subsequent requests to put the view onscreen result in "message sent to deallocated instance" errors. Apparently, a deallocated instance isn't the same thing as nil. I thought about putting an additional line after [myView release] that reads myView = nil. However, that could result in errors (any calls to myView after that line would probably yield errors). So, how can I solve this problem?

    Read the article

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently, I've been working on a branch with occasional merges from the trunk, and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking at alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfies the following conditions:
    1. Complete revision history should be stored in SVN for both trunk and branches.
    2. Merging in either direction (and potentially criss-crossing) should be relatively painless.
    3. Merging history should be stored in SVN to the greatest extent possible.
    I've looked at both git-svn and bzr-svn and neither seems to be up to the job - basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job of handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges - the revision history is one long line. As a result, any attempt to merge from trunk in git yields just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?

    Read the article

  • How to best implement Version Control for Web Development?

    - by Adam Taylor
    Version control systems are obviously important in development projects, but their use in web development projects appears to be more complex, what with the requirement of having a web server to run all but the simplest of web applications. With that in mind, I have looked around and discovered a few different methods of using version control in web development projects:
    1. Provide each developer with a virtual machine which is a replica of the development server, and have the developer run their working copy of the application in the virtual machine.
    2. Have each developer use a subdomain on the development server, e.g. john.project.com, and check out their working copy of the app to the directories the subdomain points to.
    3. Use the version control system to check out code, make a change, commit the code, and then check it on the development server (which points to the head of the repository).
    I can see a drawback of 1 being the added time required to create the virtual machines and ensure that they are kept in sync with the development server (also the need(?) to continuously change the developer's hosts file to point at the virtual machine rather than the development server). I can see 2 possibly being a problem if absolute URLs are used within the site, unless there is an easy way to update the configuration to use the new subdomains as well. 3 is the easiest to set up, but is rather primitive, and it will presumably become quite tedious for a developer to keep checking in the code after every change. How have the users of Stack Overflow used version control with web development projects, and which method/workflow was most effective? Please also include extra methods I haven't thought of / read about.

    Read the article
