Search Results

Search found 245 results on 10 pages for 'commission junction'.

Page 9/10

  • USB question (how durable is it, how should I work around this)

    - by Shiki
    The plot is quite simple. I got a Razer mouse. If I plug it in, it works, but after a shutdown or hibernation I have to replug it entirely at the back of the PC. (It works on my laptop even after several shutdowns, so yes, I guess it's my motherboard. It still has 2 years of warranty and it supports quad SLI, so it's not an old motherboard at all: an MSI P7N SLI FI, bought on a Hungarian guy's recommendation.) I could only come up with one "solution": get 3 USB extension cables (you know, USB-to-USB), the shortest ones if possible (I don't know whether responsiveness or anything else will worsen), and replug only the junction between the middle cable and the one closest to the USB port, since those connections are replaceable. What do you think? Any other ideas? (The BIOS is updated; the mouse driver doesn't really matter, since the mouse won't even blink a bit after this happens. It lights up and then goes totally dead.)

    Read the article

  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C: drive) to at least 2 other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems it would be a bit more tedious that way. I want to move the 2 Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD. I have done a bit of research on this and read up on several approaches: booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, and accomplishing this automatically with an auto-attend file that Windows Setup processes before the user is ever booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open, and certain files can't be found or opened even if I click on them directly (Regedit, for example). Also, I can't run the Command (CMD) prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7. So, where should I go from here, and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to: do I tell them to install to C:\Program Files / C:\Program Files (x86), or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
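
    For reference, the copy-then-junction step described above can be scripted. This is only a minimal sketch of the idea, with hypothetical paths; the folder being moved must not be in use (which is why people do this from the Windows 8 USB/setup environment), and the original folder has to be renamed or removed before the junction can be created at its path:

        import subprocess

        SRC  = r"C:\Program Files"    # hypothetical example paths
        DEST = r"D:\Program Files"

        # Mirror the folder, preserving attributes and ACLs and skipping any
        # junction points inside it (robocopy uses exit codes 0-7 to report
        # success, so a nonzero code is not treated as fatal here).
        subprocess.run(["robocopy", SRC, DEST, "/E", "/COPYALL", "/XJ"])

        # After renaming or deleting the original folder, recreate it as an
        # NTFS junction so existing paths keep resolving (elevated prompt).
        subprocess.run(["cmd", "/c", "mklink", "/J", SRC, DEST], check=True)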

    Read the article

  • So it comes to PASS…

    - by Tony Davis
    How does your company gauge the benefit of attending a technical conference? What's the best change you made as a direct result of attendance? It's time again for the PASS Summit and I, like most people, go with a set of general goals for enhancing technical knowledge: to learn more about PowerShell, to drill into SQL Server performance tuning techniques, and so on. Most will write up a brief report on the event for the rest of the team. Ideally, however, it will go a bit further than that; each conference should result in a specific improvement to one of your systems, or in the way you do your job.

    As co-editor of Simple-talk.com, responsible for the majority of our SQL books, I find my "high level" goals don't vary much from conference to conference. I'm always on the lookout for good new authors. I target interesting new technologies and tools and try to learn more. I return with a list of actions, new articles to commission, and potential new authors. Three years ago, however, I started setting myself the goal of implementing "one new thing" after each conference. After one, I adopted Kanban for managing my workload, a technique that places strict limits on "work in progress" and makes the overall workload, and backlog, highly visible. After another, I trialled a community book project. At PASS 2010, one of my general goals was to delve deeper into SQL Server transaction log mechanics, but on top of that, I set a specific goal of writing something useful on the topic. I started a Stairway series and, ultimately, it's turned into a book!

    If you're attending the PASS Summit this year, take some time to consider what specific improvement or change you'll implement as a result. Also, try to drop by the Red Gate booth (#101). During the Vendor event on Wednesday evening, Gail Shaw and I will be there to discuss, and hand out copies of, the book. Cheers, Tony.

    Read the article

  • Oracle Tutor: Are Documented Policies and Procedures Necessary?

    - by emily.chorba(at)oracle.com
    People refer to policies and procedures with a variety of expressions, including business process documentation, standard operating procedures (SOPs), department operating procedures (DOPs), work instructions, specifications, and so on. For our purpose here, policies and procedures mean a set of documents that describe an organization's policies (rules) for operation and the procedures (containing tasks performed by individuals) to fulfill the policies. When an organization documents policies and procedures properly, they can be the strategic link between an organization's vision and its daily operations.

    Policies and procedures are often necessary because of some external requirement, such as environmental compliance or other governmental regulations. One example of an external requirement would be the American Sarbanes-Oxley Act, requiring full openness in accounting practices. Here are a few other examples of business issues that necessitate writing policies and procedures:

      - Operational needs -- policies and procedures ensure fundamental processes are performed in a consistent way that meets the organization's needs.
      - Risk management -- policies and procedures are identified by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) as a control activity needed to manage risk.
      - Continuous improvement -- procedures can improve processes by building important internal communication practices.
      - Compliance -- well-defined and documented processes (i.e. procedures, training materials), along with records that demonstrate process capability, can demonstrate an effective internal control system compliant with regulations and standards.

    In addition to helping with the above business issues, policies and procedures can support the basic needs of employees and management. Well-documented and easy-to-access policies and procedures:

      - allow employees to understand their roles and responsibilities within predefined limits and to stay on the accepted path identified by the organization's management
      - provide clarity to the reader when dealing with accountability issues or activities that are of critical importance
      - allow management to guide operations without constant intervention
      - allow managers to control events in advance and prevent employees from making costly mistakes

    Can you think of another way organizations can meet the above needs of management and their employees in place of documented policies and procedures? Probably not, but we would love your feedback on this question. And that, my friends, is why documented policies and procedures are very necessary.

    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba, Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • Adding nodes to MAAS server

    - by Yasith Tharindu
    I was able to install the MAAS server using Ubuntu 12.04, then boot up nodes via PXE and install maas-precise-x86-64-commissioning through PXE. Now the installation is done, but I'm unable to commission the nodes with the MAAS server: a node does not show up, and I'm unable to add it manually either; that ends with the following error. Also, what is the default username/password for maas-precise-x86-64-commissioning? I'm unable to log in.

    This is the error when adding a node manually:

        ERROR 2012-11-20 08:32:54,500 maas.maasserver ################################ Exception: timed out ################################
        ERROR 2012-11-20 08:32:54,501 maas.maasserver Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 111, in get_response
            response = callback(request, *callback_args, **callback_kwargs)
          File "/usr/lib/python2.7/dist-packages/django/views/decorators/vary.py", line 22, in inner_func
            response = func(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/piston/resource.py", line 166, in call
            result = self.error_handler(e, request, meth, em_format)
          File "/usr/lib/python2.7/dist-packages/piston/resource.py", line 164, in call
            result = meth(request, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 251, in dispatcher
            self, request, request.method, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 193, in perform_api_operation
            return method(handler, request, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 493, in new
            node = create_node(request)
          File "/usr/lib/python2.7/dist-packages/maasserver/api.py", line 418, in create_node
            return form.save()
          File "/usr/lib/python2.7/dist-packages/maasserver/forms.py", line 234, in save
            node = super(NodeWithMACAddressesForm, self).save()
          File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 363, in save
            fail_message, commit, construct=False)
          File "/usr/lib/python2.7/dist-packages/django/forms/models.py", line 85, in save_instance
            instance.save()
          File "/usr/lib/python2.7/dist-packages/maasserver/models.py", line 114, in save
            return super(CommonInfo, self).save(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 460, in save
            self.save_base(using=using, force_insert=force_insert, force_update=force_update)
          File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 570, in save_base
            created=(not record_exists), raw=raw, using=using)
          File "/usr/lib/python2.7/dist-packages/django/dispatch/dispatcher.py", line 172, in send
            response = receiver(signal=self, sender=sender, **named)
          File "/usr/lib/python2.7/dist-packages/maasserver/provisioning.py", line 485, in provision_post_save_Node
            profile, power_type, preseed_data)
          File "/usr/lib/python2.7/dist-packages/maasserver/provisioning.py", line 245, in call
            result = self.method(*args)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in call
            return self._send(self._name, args)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1578, in _request
            verbose=self._verbose
          File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in request
            return self.single_request(host, handler, request_body, verbose)
          File "/usr/lib/python2.7/xmlrpclib.py", line 1294, in single_request
            response = h.getresponse(buffering=True)
          File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
            response.begin()
          File "/usr/lib/python2.7/httplib.py", line 407, in begin
            version, status, reason = self._read_status()
          File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
            line = self.fp.readline()
          File "/usr/lib/python2.7/socket.py", line 447, in readline
            data = self._sock.recv(self._rbufsize)
        timeout: timed out

    Read the article

  • Oracle 5th Annual Maintenance Summit - Orlando March 22-23, 2011

    - by stephen.slade(at)oracle.com
    It's not too late to register today or tomorrow for this exclusive "Maintenance Professionals Only" event. In 4 tracks, 27 customer and partner speakers will present case studies and success stories in these 'no-sell zone' sessions. The take-aways will be worth the trip!

    This "2 in 1" event combines a Customer Showcase featuring Orlando Utilities Commission (OUC) and the Maintenance Summit. OUC - the local municipal utility providing residential, commercial, and industrial customers with clean, reliable, and affordable electric and water services - will open the event with their CIO as keynote speaker, and host tours of their fleet, facility, and power generation operations. Recognized as a green leader, OUC has been the most reliable power provider in Florida for the past 9 years due, in large part, to the operational efficiencies of its plant and asset maintenance systems. This Summit will feature breakout session tracks for EBS, JD Edwards, PeopleSoft and Sustainability. Highlights include over 12 Oracle solution demo stations, over 25 interactive breakout sessions, a pool-side networking reception with live band, a partner exhibit pavilion, and a special appearance by Sean D. Tucker, Team Oracle Stunt-Pilot!

      Dates: March 22-23, 2011
      Location: Orlando World Center Marriott, Orlando, Florida
      Evite: http://www.oracle.com/us/dm/h2fy11/65971-nafm10019768mpp191c003-oem-304204.html
      Highlights: Keynotes, Oracle Expert Demo Stations, Interactive Breakout Sessions, Networking Reception, Partner Pavilion, Speakers
      Tracks: EBS, JDE, PSFT, Sustainability
      Tours: Orlando Utility Operations, Fleet and Facility
      Oracle Demo Stations: Agile, AutoVue, Primavera, MOC/SSDM, Utilities, PIM, PDQ, UCM, On Demand, Business Accelerators, Facilities Work Management, EBS Enterprise Asset Management, PeopleSoft Maintenance Management, Technology, Hardware/Sun
      Partner-Sponsors: Viziya, Global PTM, MiPro, Asset Management Solutions, Ventureforth, Impac Services, EAM Master LLC, Meridium

    Read the article

  • Second Edition of Regular Expressions Cookbook Has Been Published

    - by Jan Goyvaerts
    The first edition of Regular Expressions Cookbook was published in May of 2009. It quickly became a bestseller, briefly holding the #1 spot in computer books on Amazon.com. It also had staying power: the ebook version was O'Reilly's top seller during the whole of 2010. So it's no surprise that our editor at O'Reilly soon contacted us for a second edition. With Steven and me both always very busy, those plans were delayed until finally both of us found the time to update the book. Work started in January.

    Today you can buy your own copy of the second edition of Regular Expressions Cookbook. O'Reilly's online shop sells the eBook in DRM-free ePub, Mobi, and PDF formats for $39.99 and the print version for $49.99. These are the list prices for the eBook and the print book. If you're looking for a discount and free shipping of the print book, you can pre-order on one of the various Amazon sites. Deliveries should start soon. The discount rates differ and are subject to change. Amazon will also pay me an affiliate commission if you use one of these links, which pretty much doubles the income I get from the book.

      - Amazon.com. Free shipping to the USA.
      - Amazon.co.uk. Free shipping to the UK and Ireland.
      - Amazon.fr. Free shipping to France, Monaco, Luxembourg, and Belgium.
      - Amazon.de. Free shipping to Germany, Austria, Switzerland, Luxembourg, Liechtenstein, Belgium, and The Netherlands.

    If you don't want to wait for the print book to arrive, the Kindle edition is already available for instant delivery from Amazon.com, Amazon.co.uk, Amazon.fr, and Amazon.de. The Kindle edition works on Amazon's Kindle hardware, and on PCs via Amazon's Kindle software (free download). I'll blog more about the book in the coming days and weeks with details about what's new in the second edition.

    Read the article

  • xml to xsl transformation

    - by amirin
    Hi, I have a requirement to extract the unique values from different nodes of a given XML document using an XSL transformation.

    The XML file contains script and markup like this:

        var IPClaimCausesNice = "";
        function changeIPClaimCause(){
            if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Diagnosis") { IPClaimCausesNice = "diagnosis" }
            if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Illness")   { IPClaimCausesNice = "illness" }
            if(CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Accident")  { IPClaimCausesNice = "accident" }
            return IPClaimCausesNice;
        }

        <section id="1903879316" name="Logos">
          <fraglink id="605609862" resid="1235000151">
            <argvalue name="CommJob">
              <var name="CommJob" type="Th_1235001170_CommJob" />
            </argvalue>
          </fraglink>
        </section>
        <section id="13483397" name="Address Block">
          <fraglink id="563800610" resid="986000123">
            <argvalue name="PersonInformation">
              <var name="AddresseePersonInformation" type="Th_1235000929_PersonInformation" />
            </argvalue>
          </fraglink>
        </section>
        <section id="1093480468" name="Details">
          <fraglink id="460316501" resid="1195000163">
            <argvalue name="currentDateTime">
              <var name="getSystemVariables.getCurrentDate" type="date" />
            </argvalue>
            <argvalue name="CommJob">
              <var name="CommJob" type="Th_1235000929_CommJob" />
            </argvalue>
            <argvalue name="ShowDOB" />
            <argvalue name="ShowYourRef" />
            <argvalue name="YourRefLabel" />
          </fraglink>
          <fraglink id="1026044336" resid="1235000070">
            <argvalue name="brandKey">
              <var name="CommJob.commJobDetails.brandingKey" type="string" />
            </argvalue>
            <argvalue name="brandSponsor">
              <var name="CommJob.client.policy.policyDetails.brandSponsor" type="string" />
            </argvalue>
          </fraglink>
        </section>

    There is also a long "Important information" letter-template section whose paragraphs switch on expressions such as CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause == "Diagnosis" (or "Accident", "Ill health") to insert words like "diagnosis" or "accident", to acknowledge receipt of the Initial Claim Form, to request further forms (progress claim form, attached questionnaire, authority to the Health Insurance Commission, medical authority), and to ask for late-lodgement details (reason for late lodgement, reason work ceased, treating doctors, reports, treatment, any return to work, related-entity financials) within 30 days if the claim forms were submitted months after the claimant ceased work.

    The values to be picked from the XML nodes and displayed in HTML are like:

        CommJob.client.policy.Insurance.Coverages.Coverage[0].Claim.IPClaimCause
        CommJob.commJobDetails.stockType
        CommJob.commJobDetails.targetClient.targetClientName
        CommJob.client.policy.policyDetails.policyStatus
        CommJob.client.policy.policyDetails.productType
        CommJob.commJobDetails.targetClient.targetClientName
        ...etc.

    Can anyone help me with a solution? My XSL transformation doesn't pick the values correctly; it only contains <xsl:template match="@*"/>. Any help on this would be great.
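
    As a sketch of one standard way to get distinct values with XSLT 1.0, here is Muenchian grouping applied to markup shaped like the section snippet above, run from Python with lxml. The element and attribute names are taken from the snippet; the stylesheet and harness are illustrative, not the poster's actual transformation:

        from lxml import etree

        # XSLT 1.0 stylesheet using Muenchian grouping: index all <var>
        # elements by @type, then keep only the first element of each group.
        XSLT_DOC = etree.XML("""
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:key name="vars-by-type" match="var" use="@type"/>
          <xsl:template match="/">
            <xsl:for-each select="//var[generate-id() =
                                        generate-id(key('vars-by-type', @type)[1])]">
              <xsl:value-of select="@type"/>
              <xsl:text>&#10;</xsl:text>
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>
        """)

        transform = etree.XSLT(XSLT_DOC)
        doc = etree.XML("""
        <root>
          <section><fraglink><argvalue>
            <var name="CommJob" type="Th_1235001170_CommJob"/>
          </argvalue></fraglink></section>
          <section><fraglink><argvalue>
            <var name="CommJob" type="Th_1235000929_CommJob"/>
          </argvalue></fraglink></section>
          <section><fraglink><argvalue>
            <var name="CommJob" type="Th_1235000929_CommJob"/>
          </argvalue></fraglink></section>
        </root>
        """)
        print(transform(doc))   # prints each distinct @type exactly once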

    Read the article

  • SQL Server 2005 database design - many-to-many relationships with hierarchy

    - by Remnant
    Note: I have completely re-written my original post to better explain the issue I am trying to understand. I have tried to generalise the problem as much as possible. Also, my thanks to the original people who responded. Hopefully this post makes things a little clearer.

    Context: In short, I am struggling to understand the best way to design a small-scale database to handle (what I perceive to be) multiple many-to-many relationships. Imagine the following scenario for a company organisational structure:

              Textile Division                  Marketing Division
                     |                                 |
              ----------------                  ----------------
              |              |                  |              |
           HR Dept     Finance Dept          HR Dept     Finance Dept
              |              |                  |              |
          ----------     ----------        ----------     ----------
          |        |     |        |        |        |     |        |
        Payroll  Hiring Audit    Tax     Payroll  Hiring Audit   Accounts
          |        |     |        |        |        |     |        |
         Emps     Emps  Emps     Emps     Emps     Emps  Emps     Emps

    NB: Emps denotes a list of employees that work in that area.

    When I first started with this issue I made four separate tables:

      - Divisions - Textile, Marketing (PK = DivisionID)
      - Departments - HR, Finance (PK = DeptID)
      - Functions - Payroll, Hiring, Audit, Tax, Accounts (PK = FunctionID)
      - Employees - list of all employees (PK = EmployeeID)

    The problem as I see it is that there are multiple many-to-many relationships, i.e. many departments have many divisions and many functions have many departments.

    Question: Given the database structure above, suppose I wanted to get all employees who work in the Payroll function of the Marketing Division. To do this I need to be able to differentiate between the two Payroll departments, but I am not sure how this can be done. I understand that I could build a 'link/junction' table between Departments and Functions so that I can retrieve which Functions are in which Departments. However, I would still need to differentiate the Division they belong to.

    Research effort: As you can see, I am an abecedarian when it comes to database design. I have spent the last two days researching this issue, traversing nested set models and adjacency models, and reading that this issue is known not to be NP-complete, etc. I am sure there is a simple solution?
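
    One way to resolve the ambiguity, sketched below under the assumption that the hierarchy stays three levels deep: use a single junction table keyed by (DivisionID, DeptID, FunctionID) and point each employee at a row of it, so "Payroll under Marketing" and "Payroll under Textile" become distinct rows. The sqlite3 schema and names here are illustrative, not from the original post:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE Division   (DivisionID INTEGER PRIMARY KEY, Name TEXT);
        CREATE TABLE Department (DeptID     INTEGER PRIMARY KEY, Name TEXT);
        CREATE TABLE Function   (FunctionID INTEGER PRIMARY KEY, Name TEXT);

        -- Junction table: one row per concrete Division/Department/Function
        -- node in the tree, so the two Payroll functions are distinct rows.
        CREATE TABLE OrgUnit (
            OrgUnitID  INTEGER PRIMARY KEY,
            DivisionID INTEGER NOT NULL REFERENCES Division,
            DeptID     INTEGER NOT NULL REFERENCES Department,
            FunctionID INTEGER NOT NULL REFERENCES Function,
            UNIQUE (DivisionID, DeptID, FunctionID)
        );

        CREATE TABLE Employee (
            EmployeeID INTEGER PRIMARY KEY,
            Name       TEXT,
            OrgUnitID  INTEGER NOT NULL REFERENCES OrgUnit
        );

        INSERT INTO Division   VALUES (1,'Textile'), (2,'Marketing');
        INSERT INTO Department VALUES (1,'HR'), (2,'Finance');
        INSERT INTO Function   VALUES (1,'Payroll'), (2,'Hiring');
        INSERT INTO OrgUnit    VALUES (1, 1, 1, 1), (2, 2, 1, 1);
        INSERT INTO Employee   VALUES (1,'Alice',1), (2,'Bob',2);
        """)

        # All employees in the Payroll function of the Marketing Division
        rows = conn.execute("""
            SELECT e.Name
            FROM   Employee e
            JOIN   OrgUnit  o ON o.OrgUnitID  = e.OrgUnitID
            JOIN   Division d ON d.DivisionID = o.DivisionID
            JOIN   Function f ON f.FunctionID = o.FunctionID
            WHERE  d.Name = 'Marketing' AND f.Name = 'Payroll'
        """).fetchall()
        print(rows)   # -> [('Bob',)]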

    Read the article

  • JBoss Developer Studio exploded SAR/EAR/WAR deployment

    - by dr_hoppa
    I am new to JBoss DS, and what I could not figure out is how to make a folder from my project deployable, an exploded SAR in my case. If I create a new server in the JBoss server view / JBoss AS perspective, select a SAR, and make it deployable, then the SAR file can be deployed on the JBoss server. Another approach that I have tried is to mark the *-service.xml file as a deployable element (same procedure as with the SAR file) and to add the Eclipse project as a dependency to the classpath of the server ('Open launch configuration' / 'Classpath'); after doing this I got an error from JBoss telling me that the JBoss classes are not found. The compromise solution that I have found is to create a junction/symlink from the output of the Eclipse project ('bin') into the JBoss 'server//deploy/xxx.sar'; this is a temporary solution but will not scale in the future, as multiple services must be created. What I would need: the ability to mark the exploded folder as deployable, to have the classes/*-service.xml files loaded from there, and to have hot-deployability for them. Any suggestions would help.

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
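
    To make the trade-off concrete, here is a minimal sqlite3 sketch of the serialize-into-a-column approach AKX suggests, using JSON text instead of a binary blob (the table and column names are made up for illustration):

        import json
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE playlist (id INTEGER PRIMARY KEY, items TEXT)")

        # The ordered, atomic list is stored as one JSON document per row,
        # so a single query returns the whole list with its order intact.
        items = ["intro.mp3", "verse.mp3", "outro.mp3"]
        conn.execute("INSERT INTO playlist (items) VALUES (?)",
                     (json.dumps(items),))

        (stored,) = conn.execute(
            "SELECT items FROM playlist WHERE id = 1").fetchone()
        assert json.loads(stored) == items   # round-trips, order preserved

    The cost is that the database can no longer index or query individual elements; that ability is what the junction-table alternative buys you.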

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

      user_id : primary key
      first_name
      last_name
      user_type_id : foreign key to the UserType table

    and another table UserType with just two fields:

      user_type_id : primary key
      name : the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)... My question is: what is the canonical term for a table like UserType that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help.

    Current state of thought: for now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts:

      http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
      http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
      Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.

    Read the article

  • Recommendations to handle development and deployment of php web apps using shared project code

    - by Exception e
    I am wondering what the best way (for a lone developer) is to:

      - develop a project that depends on code of other projects
      - deploy the resulting project to the server

    I am planning to put my code in svn, and have shared code as a separate project. There are problems with svn:externals which I cannot fully estimate. I've read "subversion:externals considered to be an anti-pattern" and "How do you organize your version control repository", but there is one special thing with php projects (and other interpreted source code): there is no final executable resulting from your libraries. External dependencies are thus always on raw source code. Ideally I really want to be able to develop simultaneously on one project and the projects it depends on.

    Possible way: check out a project's dependency in a sub folder as a working copy of the trunk. Problems I foresee:

      - When you want to deploy a project, you might want to freeze its dependencies, right?
      - The dependency code should not end up as a duplicate in the project's repository, I think.
      - (Update 1: I additionally assume svn:ignore will pose problems if I cannot fall back on symlinks; see my comment.) I am still looking for suggestions that do not require the use of junction points; they are a sort of unsupported hack in WinXP which may break some programs.

    This leads me to the last part of the question (as one has influence on the other): how do you deploy apps with such dependencies? I've looked into Buildout for Python, but it seems to be tightly related to the Python ecosystem (resolving and fetching Python modules from the web, etc). I am very eager to learn about your best practices.

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well.

    Pros:

      - One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and file backup.
      - It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format).
      - The backups were normal NTFS filesystems; no special tool was needed to read them.
      - It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills).
      - It copied all file attributes including security.

    Cons:

      - It doesn't deal well with junction points, symbolic links, and hard links.
      - It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files.
      - It didn't have open file handling, except for registry hives.
      - I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as presenting much of a problem; you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack.

    Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and also can run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would be fine. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)
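
    For what it's worth, the copy-with-archive scheme described under Pros is simple to sketch. This is only an illustration of the algorithm with hypothetical paths, not a replacement for a real backup tool (no VSS/open-file handling, no ACL-aware copying):

        import shutil
        from pathlib import Path

        SOURCE  = Path(r"C:\Data")      # hypothetical example paths
        MIRROR  = Path(r"E:\Mirror")
        ARCHIVE = Path(r"E:\Archive")

        def mirror_file(src: Path) -> None:
            dst = MIRROR / src.relative_to(SOURCE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            if dst.exists():
                if src.stat().st_mtime <= dst.stat().st_mtime:
                    return            # mirror copy is already current
                # Archive the old version instead of overwriting it,
                # tagged with its modification time.
                stamp = int(dst.stat().st_mtime)
                archived = ARCHIVE / src.relative_to(SOURCE)
                archived = archived.with_name(f"{archived.name}.{stamp}")
                archived.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(dst), str(archived))
            shutil.copy2(src, dst)    # copies file data, times and attributes

        for path in SOURCE.rglob("*"):
            if path.is_file():
                mirror_file(path)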

    Read the article

  • How could I let Skydrive desktop sync to MicroSD in Windows 8 tablet?

    - by peSHIr
    I have a Samsung Slate 7 tablet with (now) Windows 8 on it. This machine has a 64 GB SSD and I have a 64 GB MicroSD card in it. I also have a SkyDrive on my main Microsoft ID that contains about 45 GB of content. With Windows and some development stuff installed, my SkyDrive will not fit on the main drive of the tablet. (Besides, my idea was to keep data on the memory card anyway, to make it easier to repave the machine without data loss if need be.) My problem should now be clear: I want to install the SkyDrive desktop app to sync my SkyDrive to the MicroSD card. This is not possible, as SkyDrive does not allow syncing files to removable drives. I have tried a number of things already, but none of them worked:

      - Using the mklink command line tool to create a directory link/junction from a folder name on the SSD to a folder on the MicroSD, and then trying to install SkyDrive sync to the SSD link folder. SkyDrive however still recognizes this as something it does not want to sync onto.
      - The various filter drivers mentioned on Agnipulse (including the Hitachi one) that should make Windows see some or all of the removable drives in the system as fixed drives do not seem to work on (64-bit) Windows 8: they either can't be installed, do nothing, and/or cause Windows 8 to go into Automatic Repair mode when rebooting.
      - The Lexar BootIt app seems to be meant to flip the relevant bit in the on-board drive controller of supported USB pen drives, but I tried it anyway. Of course it did nothing to how the MicroSD card was seen.

    I have now run out of ideas, it seems, and I was wondering if anyone here has a solution to let Windows 8 see the MicroSD memory card in my tablet as a fixed drive instead of a removable drive, or some other way of getting the SkyDrive desktop app to sync my SkyDrive data to that MicroSD card. And to be complete: this is not a duplicate of the previously linked questions, as those ask about getting USB drives with multiple partitions to work on Windows XP. This question is specifically about getting desktop SkyDrive to sync to a MicroSD card in Windows 8, which seems to be a question I have not seen on Super User so far.

    Read the article

  • Spaces and Parentheses in Windows PATH variable screw up batch files

    - by NoName
    So, my PATH variable (System - Adv Settings - Env Vars - System - PATH) is set to:

        C:\Python26\Lib\site-packages\PyQt4\bin;
        %SystemRoot%\system32;
        %SystemRoot%;
        %SystemRoot%\System32\Wbem;
        %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;
        C:\Python26\;
        C:\Python26\Scripts\;
        C:\cygwin\bin;
        "C:\PathWithSpaces\What_is_this_bullshit";
        "C:\PathWithSpaces 1.5\What_is_this_bullshit_1.5";
        "C:\PathWithSpaces (2.0)\What_is_this_bullshit_2.0";
        "C:\Program Files (x86)\IronPython 2.6";
        "C:\Program Files (x86)\Subversion\bin";
        "C:\Program Files (x86)\Git\cmd";
        "C:\Program Files (x86)\PuTTY";
        "C:\Program Files (x86)\Mercurial";
        Z:\droid\android-sdk-windows\tools;

    although, obviously, without the newlines. Notice the entries containing PathWithSpaces: the first has no spaces, the second has a space, and the third has a space followed by a parenthesis. Now, notice the output of this batch file:

        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\>vcvars32.bat
        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin>"C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\Tools\vsvars32.bat"
        Setting environment for using Microsoft Visual Studio 2008 x86 tools.
        \What_is_this_bullshit_2.0";"C:\Program was unexpected at this time.
        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin> set "PATH=C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Python26\Lib\site-packages\PyQt4\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Python26\;C:\Python26\Scripts\;C:\cygwin\bin;"C:\PathWithSpaces\What_is_this_bullshit";"C:\PathWithSpaces 1.5\What_is_this_bullshit_1.5";"C:\PathWithSpaces (2.0)\What_is_this_bullshit_2.0";"C:\Program Files (x86)\IronPython 2.6";"C:\Program Files (x86)\Subversion\bin";"C:\Program Files (x86)\Git\cmd";"C:\Program Files (x86)\PuTTY";"C:\Program Files (x86)\Mercurial";Z:\droid\android-sdk-windows\tools;"

    or specifically the line:

        \What_is_this_bullshit_2.0";"C:\Program was unexpected at this time.

    So, what is this bullshit? Specifically:

      - Directory in path that is properly escaped with quotes, but with no spaces = fine
      - Directory in path that is properly escaped with quotes, and has spaces but no parenthesis = fine
      - Directory in path that is properly escaped with quotes, and has spaces and a parenthesis = ERROR

    What's going on here? How can I fix this? I'll probably resort to a junction point to let my tools still work as a workaround, but if you have any insight into this, please let me know :)

    Read the article

  • Bancassurers Seek IT Solutions to Support Distribution Model

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, attended the third annual Bancassurance Forum in Vienna last month. He reports that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    Vienna is at the crossroads between mature Western European markets, where bancassurance is now an established best practice, and more recently tapped Eastern European markets that offer the greatest growth potential. Attendance at the Bancassurance Forum was good, with 87 bancassurance attendees, most in very senior positions in the industry. The conference provided the chance for a lively discussion among bancassurers looking to keep abreast of the latest trends in one of Europe's most successful distribution models for insurance. Even under normal business conditions, there is a great demand for best practice sharing within the industry, as there is no standard formula for success. Each company has to chart its own course and choose the strategies for sales, product development and the structure of ownership that make sense for its business, and as soon as they get it right, bancassurers need to adapt the mix to keep up with ever-changing regulations, competition and economic conditions.

    To optimize the overall relationship between banking and insurance for mutual benefit, a balance needs to be struck between potentially conflicting interests. The banking side of the house is looking for greater wallet share from its customers and the ability to increase profitability by bundling insurance products with higher margins, especially in light of the recent economic crisis, where margins for traditional banking products are low and competition high. The insurance side of the house seeks access to new customers through a complementary distribution channel that is efficient and cost effective. To make the relationship work, it is important that both sides of the same house forge strategic and long-term relationships, irrespective of whether the underlying business model is supported by a distribution agreement, cross-ownership or other forms of capital structure.

    However, this third annual conference was not held under normal business conditions. The conference took place in challenging, yet interesting times. ING's forced spinoff of its insurance operations under pressure from the EU Commission and the troubling losses suffered by Allianz as a result of the Dresdner bank sale were fresh in everyone's mind. One year after markets crashed, there is now enough hindsight to better understand the implications for bancassurance and the best practices that are emerging to deal with them. The loan-driven business that has been crucial to bancassurance up till now evaporated during the crisis, leaving bancassurers grappling with how to change their overall strategy from a loan-driven to a more diversified model. Attendees came to the conference to learn what strategies were working, not only to cope with the market shift, but to take advantage of it as markets pick up. Over the course of 14 customer case studies and numerous analyst presentations, topical issues ranging from getting the business model right to the impact of Solvency II on capital structuring were debated openly.
    Many speakers alluded to the need to specifically design insurance products with the banking distribution channel in mind, which brings with it specific requirements such as a high degree of standardization to achieve efficiency and reduce training costs. Moreover, products must be engineered to suit end consumers who consider banks a one-stop shop. The importance of IT to the successful implementation of bancassurance strategies was a theme that surfaced regularly throughout the conference. The cross-selling opportunity, which will ultimately determine the success or failure of any bancassurance model, can only be fully realized through a flexible IT architecture that enables banking and insurance processes to be integrated and presented to front-line staff through a common interface. However, the reality is that most bancassurers have legacy IT systems, which constrain the business's ability to implement new strategies and maintain competitiveness in turbulent times.

    My colleague Glenn Lottering, who chaired the conference, believes that the primary opportunities for bancassurers to extract value from their IT infrastructure investments lie in distribution management, risk management with the advent of Solvency II, and achieving operational excellence. "Oracle is ideally suited to meet the needs of bancassurance," Glenn noted, "supplying market-leading software for both banking and insurance. Oracle provides adaptive systems that let customers easily integrate hybrid business processes from both worlds while leveraging existing IT infrastructure."

    Overall, the consensus at the conference was that the outlook for bancassurance in EMEA remains positive, despite changing market conditions that have led a number of bancassurers to re-examine their business models.

    John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Data Security Through Structure, Procedures, Policies, and Governance

    Security Structure and Procedures

    One of the easiest ways to implement security is through the use of structure, in particular the structure in which data is stored. The preferred method for this is the use of User Roles; these Roles allow specific access to be granted based on what role a user plays in relation to the data they are manipulating. Typical data access actions are defined by the CRUD principle:

      - Create new data
      - Read existing data
      - Update existing data
      - Delete existing data

    Based on the actions assigned to a role, a User can manipulate data as they need to perform daily business operations. An example of this can be seen in a hospital, where doctors have been assigned Create, Read, Update, and Delete access to their patients' prescriptions, so that a doctor can prescribe and adjust any existing prescriptions as necessary. However, a nurse will only have Read access on the patient's prescriptions, so that they will know what medicines to give to the patients. Notice that nurses do not have access to prescribe new prescriptions, or to update or delete existing ones, because only the patient's doctor has access to perform those actions. With User Roles comes responsibility: companies need to constantly monitor data access to ensure that the proper roles have the most appropriate access levels and that users are not exposed to inappropriate data. In addition, this also protects against rogue employees gaining access to critical business information that could be destroyed, altered or stolen. It is important that all data access is monitored because of this threat.

    Security Governance

    Current data governance laws regarding security include:

      - Health Insurance Portability and Accountability Act (HIPAA)
      - Sarbanes-Oxley Act
      - Database Breach Notification Act

    The US Department of Health and Human Services defines HIPAA as a Privacy Rule. This legislation protects the privacy of individually identifiable health information. Currently, HIPAA sets the national standards for securing electronically protected health records. Additionally, its confidentiality provisions protect identifiable information being used to analyze patient safety events and improve patient safety. In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. Sarbanes-Oxley is not a set of standard business rules and does not specify how a company should retain its records; in fact, the act outlines which pieces of data are to be stored as well as the storage duration. The Database Breach Notification Act requires companies, in the event of a data breach containing personally identifiable information, to notify all California residents whose information was stored on the compromised system at the time of the event, according to Gregory Manter. He further explains that this act is only California legislation. However, it does affect "any person or business that conducts business in California, and that owns or licenses computerized data that includes personal information," regardless of where the compromised data is located. Any business that maintains at least limited interactions with California residents will find itself subject to the Act's provisions.
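
    As a minimal sketch of the role-based CRUD model in the hospital example above (the role names and the tiny API here are illustrative, not from any particular product):

        # Hypothetical role-to-permission map: doctors get full CRUD on
        # prescriptions, nurses get Read only.
        ROLE_PERMISSIONS = {
            "doctor": {"create", "read", "update", "delete"},
            "nurse":  {"read"},
        }

        def authorize(role: str, action: str) -> None:
            """Raise unless the role is allowed to perform the action."""
            if action not in ROLE_PERMISSIONS.get(role, set()):
                raise PermissionError(f"role '{role}' may not '{action}' prescriptions")

        authorize("doctor", "update")   # allowed: doctors adjust prescriptions
        authorize("nurse", "read")      # allowed: nurses read them
        authorize("nurse", "update")    # raises PermissionError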
    Security Policies

    All companies must work in accordance with the appropriate city, county, state, and federal laws. One way to ensure that a company is legally compliant is to enforce security policies that adhere to the appropriate legislation in the area or areas that it services. These types of policies need to be mandated by a company's Security Officer. For smaller companies, these policies need to come from executives, directors, and owners.

    Read the article

  • Government Mandates and Programming Languages

    A recent SEC proposal (which, at over 600 pages, I haven't read in any detail) includes the following:

        We are proposing to require the filing of a computer program (the "waterfall computer program", as defined in the proposed rule) of the contractual cash flow provisions of the securities in the form of downloadable source code in Python, a commonly used computer programming language that is open source and interpretive. The computer program would be tagged in XML and required to be filed with the Commission as an exhibit. Under our proposal, the filed source code for the computer program, when downloaded and run (by loading it into an open Python session on the investor's computer), would be required to allow the user to programmatically input information from the asset data file that we are proposing to require as described above. We believe that, with the waterfall computer program and the asset data file, investors would be better able to conduct their own evaluations of ABS and may be less likely to be dependent on the opinions of credit rating agencies. With respect to any registration statement on Form SF-1 (Section 239.44) or Form SF-3 (Section 239.45) relating to an offering of an asset-backed security that is required to comply with Item 1113(h) of Regulation AB, the Waterfall Computer Program (as defined in Item 1113(h)(1) of Regulation AB) must be written in the Python programming language and able to be downloaded and run on a local computer properly configured with a Python interpreter. The Waterfall Computer Program should be filed in the manner specified in the EDGAR Filer Manual.

    I don't see how it can be in investors' best interests that the SEC demand a particular programming language be used for software related to investment data. I have a feeling that investors who use computers at all already have software with which they are familiar, and that the vast majority of them are not running an open source scripting language on their machines to do their financial analysis. In fact, I would wager that most of them are using tools like Excel, and if they really need to script anything, it's being done with VBA in Excel.

    Now, I'm not proposing that the SEC should require that the data be provided in Excel format with VBA scripts included so everyone can easily access the data (despite the fact that this would actually be pretty useful generally). Rather, I think it is ill-advised for a government agency to make recommendations of this nature, period. If the goal of the recommendation is to ensure that the way things work is codified in a transparent manner, then I can certainly respect that. It seems to me that this could be accomplished without dictating the technology to use. To wit:

      - An Excel document could contain all of the data as well as the formulae necessary, and most likely would not require the end user to install anything on their machine
      - The SEC could simply create a calculator in the cloud such that any/all investors could use a single canonical web-based (or web-service-based) tool
      - Millions of Java and .NET developers could write their own implementations

    You can read more about this issue, including the favorable position on it, on Jayanth Varma's blog.
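
    For readers unfamiliar with the term, a "waterfall" program just allocates each period's cash collections to claim-holders in priority order. A toy Python sketch of the idea (a hypothetical sequential-pay structure, not the SEC's specification):

        # Distribute one period's collections to tranches in priority order
        # until each tranche's outstanding principal balance is retired.
        def run_waterfall(collections, tranches):
            remaining = collections
            payments = {}
            for tranche in tranches:              # senior tranches are paid first
                paid = min(remaining, tranche["balance"])
                tranche["balance"] -= paid
                payments[tranche["name"]] = paid
                remaining -= paid
            return payments, remaining            # leftover goes to the residual

        tranches = [{"name": "A", "balance": 700.0},
                    {"name": "B", "balance": 200.0},
                    {"name": "C", "balance": 100.0}]
        print(run_waterfall(250.0, tranches))
        # -> ({'A': 250.0, 'B': 0.0, 'C': 0.0}, 0.0)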

    Read the article

  • Getting Help with 'SEPA' Questions

    - by MargaretW
    What is 'SEPA'? The Single Euro Payments Area (SEPA) is a self-regulatory initiative for the European banking industry championed by the European Commission (EC) and the European Central Bank (ECB). The aim of the SEPA initiative is to improve the efficiency of cross-border payments and the economies of scale by developing common standards, procedures, and infrastructure. The SEPA territory currently consists of 33 European countries: the 28 EU states, together with Iceland, Liechtenstein, Monaco, Norway and Switzerland. Part of that infrastructure includes two new SEPA instruments that were introduced in 2008:

      - SEPA Credit Transfer (a Payables transaction in Oracle EBS)
      - SEPA Core Direct Debit (a Receivables transaction in Oracle EBS)

    A SEPA Credit Transfer (SCT) is an outgoing payment instrument for the execution of credit transfers in euro between customer payment accounts located in SEPA. SEPA Credit Transfers are executed on behalf of an Originator holding a payment account with an Originator Bank in favor of a Beneficiary holding a payment account at a Beneficiary Bank. In R12 of Oracle applications, the current SEPA Credit Transfer implementation is based on Version 5 of the "SEPA Credit Transfer Scheme Customer-To-Bank Implementation Guidelines" and the "SEPA Credit Transfer Scheme Rulebook" issued by the European Payments Council (EPC). These guidelines define the rules to be applied to the UNIFI (ISO 20022) XML message standards for the implementation of SEPA Credit Transfers in the customer-to-bank space. This format is compliant with SEPA Credit Transfer version 6.

    A SEPA Core Direct Debit (SDD) is an incoming payment instrument used for making domestic and cross-border payments within the 33 countries of SEPA, wherein the debtor (payer) authorizes the creditor (payee) to collect the payment from his bank account. The payment can be a fixed amount like a mortgage payment, or variable amounts such as those of invoices. The "SEPA Core Direct Debit" scheme replaces the various country-specific direct debit schemes currently prevailing within the SEPA zone. SDD is based on the ISO 20022 XML messaging standards, version 5.0 of the "SEPA Core Direct Debit Scheme Rulebook", and the "SEPA Direct Debit Core Scheme Customer-to-Bank Implementation Guidelines". This format is also compliant with SEPA Core Direct Debit version 6.

    EU Regulation #260/2012 established the technical and business requirements for both instruments in euro. The regulation is referred to as the "SEPA end-date regulation", and also defines the deadlines for the migration to the new SEPA instruments:

      - Euro Member States: February 1, 2014
      - Non-Euro Member States: October 31, 2016
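
    To make the ISO 20022 part concrete, here is a deliberately skeletal sketch of generating a SEPA Credit Transfer initiation message (pain.001.001.03) in Python. The IBANs, IDs and amounts are made up, and a real file must carry further mandatory elements and validate against the official XSD; this only illustrates the general shape of the message:

        import datetime
        import xml.etree.ElementTree as ET

        NS = "urn:iso:std:iso:20022:tech:xsd:pain.001.001.03"
        ET.register_namespace("", NS)

        def el(parent, tag, text=None, **attrs):
            # Helper: append a namespaced child element with optional text.
            node = ET.SubElement(parent, f"{{{NS}}}{tag}", attrs)
            if text is not None:
                node.text = text
            return node

        doc = ET.Element(f"{{{NS}}}Document")
        cti = el(doc, "CstmrCdtTrfInitn")

        hdr = el(cti, "GrpHdr")                       # group header
        el(hdr, "MsgId", "MSG-0001")
        el(hdr, "CreDtTm", datetime.datetime.now().isoformat(timespec="seconds"))
        el(hdr, "NbOfTxs", "1")
        el(el(hdr, "InitgPty"), "Nm", "Example Originator")

        pmt = el(cti, "PmtInf")                       # one payment batch
        el(pmt, "PmtInfId", "PMT-0001")
        el(pmt, "PmtMtd", "TRF")
        el(el(pmt, "Dbtr"), "Nm", "Example Originator")
        el(el(el(pmt, "DbtrAcct"), "Id"), "IBAN", "DE89370400440532013000")

        tx = el(pmt, "CdtTrfTxInf")                   # one credit transfer
        el(el(tx, "PmtId"), "EndToEndId", "E2E-0001")
        el(el(tx, "Amt"), "InstdAmt", "100.00", Ccy="EUR")
        el(el(tx, "Cdtr"), "Nm", "Example Beneficiary")
        el(el(el(tx, "CdtrAcct"), "Id"), "IBAN", "FR1420041010050500013M02606")

        print(ET.tostring(doc, encoding="unicode"))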
    Oracle and SEPA

    Within the Oracle E-Business Suite of applications, Oracle Payables (AP), Oracle Receivables (AR), and Oracle Payments (IBY) provide SEPA transaction capabilities for the following releases, as noted:

      - Release 11.5.10.x - AP & AR
      - Release 12.0.x - AP & AR & IBY
      - Release 12.1.x - AP & AR & IBY
      - Release 12.2.x - AP & AR & IBY

    Resources

    To assist our customers in migrating, using, and troubleshooting SEPA functionality, a number of resource documents related to SEPA are available on My Oracle Support (MOS), including:

      - R11i: AP: White Paper - SEPA Credit Transfer V5 support in Oracle Payables, Doc ID 1404743.1
      - R11i: AR: White Paper - SEPA Core Direct Debit v5.0 support in Oracle Receivables, Doc ID 1410159.1
      - R12: IBY: White Paper - SEPA Credit Transfer v5 support in Oracle Payments, Doc ID 1404007.1
      - R12: IBY: White Paper - SEPA Core Direct Debit v5 support in Oracle Payments, Doc ID 1420049.1
      - R11i/R12: AP/AR/IBY: Get Help Setting Up, Using, and Troubleshooting SEPA Payments in Oracle, Doc ID 1594441.2
      - R11i/R12: Single European Payments Area (SEPA) - UPDATES, Doc ID 1541718.1
      - R11i/R12: FAQs for Single European Payments Area (SEPA), Doc ID 791226.1

    Read the article

  • Trouble with Google Finance API

    - by BANSAL MOHIT
    When I try to buy shares using the Google Finance API, I get an exception. Please help.

        run:
        Enter user ID: **@gmail.com
        Enter user password: **
        Enter transaction type: Buy
        Enter transaction date (yyyy-mm-dd): 2010-03-10
        Enter number of shares (optional, e.g. 100.0):
        Enter price (optional, e.g. 141.14): 12.0
        Enter commission (optional, e.g. 20.0): 23.0
        Enter currency (optional, e.g. USD, EUR, JPY): USD
        Enter any notes: Notes
        Enter portfolio ID: 1
        Enter ticker (EXCHANGE:SYMBOL): NASDAQ:INFY
        Inserting Entry at location: http://finance.google.com/finance/feeds/default/portfolios/1/positions/NASDAQ:INFY/transactions
        The server had a problem handling your request.
        com.google.gdata.util.ServiceForbiddenException: Forbidden
        Exception message unavailable
            at com.google.gdata.client.http.HttpGDataRequest.handleErrorResponse(HttpGDataRequest.java:561)
            at com.google.gdata.client.http.GoogleGDataRequest.handleErrorResponse(GoogleGDataRequest.java:563)
            at com.google.gdata.client.http.HttpGDataRequest.checkResponse(HttpGDataRequest.java:536)
            at com.google.gdata.client.http.HttpGDataRequest.execute(HttpGDataRequest.java:515)
            at com.google.gdata.client.http.GoogleGDataRequest.execute(GoogleGDataRequest.java:535)
            at com.google.gdata.client.Service.insert(Service.java:1347)
            at com.google.gdata.client.GoogleService.insert(GoogleService.java:599)
            at financetester.Main.insertTransactionEntry(Main.java:169)
            at financetester.Main.main(Main.java:81)
        BUILD SUCCESSFUL (total time: 1 minute 4 seconds)

    Read the article

  • What are your Programming Fallacies/Myths?

    - by pms1969
    I recently started a new job, and as is typical of all jobs, if you've left, you get blamed for everything. Not long after I started, there was a change required for an app (web based) that we maintain, and it was quickly pointed out that the actual code for this site had been lost a long time ago, and the only changes we could make to it were ones that required changes to mark-up [it was a pre-compiled site].

    Being new, I needed a little help finding my way around the code, and enlisted the services of one of my colleagues. I made my changes, and then re-enlisted his help to deploy it. While prepping for the deployment (getting the app on the QA server) we discovered that there were actually two different, very similarly named folders in our source repository. It transpired that for the last year or so, mark-up changes had been made to the site directly, and these were the only differences from the code in the slightly incorrectly named folder in source control. So we did have all the code, and can now properly support the site.

    This put me in mind of a trick we played on a junior programmer once in a previous job, where we told him he couldn't/shouldn't do a certain thing in code as it would likely bring the server to its knees and cost the company thousands of pounds (a gag that lasted months :-). And another one in the first programming job I took on - the batch commission run was just going to crash once a month and there was nothing to be done about it, causing a call out, and call-out compensation for the on-call guy (a bug I fixed as soon as I became the on-call guy - 2am call outs don't work for me).

    So I was wondering... what other programming fallacies/myths are out there that are worth sharing?

    Read the article

  • Display a summary of form element values before submit

    - by Dirty Bird Design
    Using jquery formtowizard.js with an ajax submit. I want the last step of the form to display a summary of all the form fields that were filled out. I can get it to work in isolated test cases, but not in full use.

    The form:

    <form id="Commission" method="post" action="PHP/CommissionsSubmit.php">
        <fieldset id="Initial">
            <legend>Enter Your Information</legend>
            <ul>
                <li>
                    <label for="FName">First Name*</label><input type="text" name="FName" id="FName">
                </li>
                <!-- repeat many li's -->
            </ul>
        </fieldset>
        <fieldset>
            <legend>Second Step</legend>
            <!-- more li's -->
        </fieldset>
        <fieldset>
            <legend>Confirmation</legend>
            <span id="CFName"></span>
        </fieldset>
    </form>

    The jQuery to get the "#CFName" value:

    $('#FName').keyup(function() {
        $('#CFName').val($(this).val());
    });

    I can't get the value to appear in the span "#CFName"... Could this have to do with the "serialize" function, or anything going on with my $.ajax submit function? It's happening before submit... Please help! I apologize, but I've gone back and forth with "#CFName" being a span and an input, using .val and .html respectively.
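    One likely culprit: jQuery's .val() only reads and writes form-control values, so calling .val() on a span is a silent no-op. A minimal sketch of a fix, assuming #CFName stays a span (all IDs as in the question; buildSummary is a hypothetical helper name):

        // .text() writes into a span; .val() would only work on an input.
        $('#FName').keyup(function () {
            $('#CFName').text($(this).val());
        });

        // With a wizard it is safer to rebuild the summary when the confirmation
        // step is shown, so values filled without a keyup (e.g. browser autofill)
        // are captured too.
        function buildSummary() {
            $('#CFName').text($('#FName').val());
        }

    If #CFName is switched back to an input, .val() becomes the right setter; the mismatch between the two approaches is consistent with the on-and-off results described in the question.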

    Read the article

  • How to Sync Any Folder With SkyDrive on Windows 8.1

    - by Chris Hoffman
    Before Windows 8.1, it was possible to sync any folder on your computer with SkyDrive using symbolic links. This method no longer works now that SkyDrive is baked into Windows 8.1, but there are other tricks you can use. Creating a symbolic link or directory junction inside your SkyDrive folder will give you an empty folder in your SkyDrive cloud storage. Confusingly, the files will appear inside the SkyDrive Modern app as if they were being synced, but they aren't.

    The Solution

    With SkyDrive refusing to understand and accept symbolic links in its own folder, the best option is probably to use symbolic links anyway -- but in reverse. For example, let's say you have a program that automatically saves important data to a folder anywhere on your hard drive -- whether it's C:\Users\USER\Documents\, C:\Program\Data, or anywhere else. Rather than trying to trick SkyDrive into understanding a symbolic link, we could instead move the actual folder itself to SkyDrive and then use a symbolic link at the folder's original location to trick the original program.

    This may not work for every single program out there, but it will likely work for most programs, which use standard Windows API calls to access folders and save files. We're just flipping the old solution here -- we can't trick SkyDrive anymore, so let's try to trick other programs instead.

    Moving a Folder and Creating a Symbolic Link

    First, ensure no program is using the external folder. For example, if it's a program data or settings folder, close the program that's using the folder. Next, simply move the folder to your SkyDrive folder: right-click the external folder, select Cut, go to the SkyDrive folder, right-click and select Paste. The folder will now be located in the SkyDrive folder itself, so it will sync normally.

    Next, open a Command Prompt window as Administrator. Right-click the Start button on the taskbar or press Windows Key + X and select Command Prompt (Administrator) to open it. Run the following command to create a symbolic link at the original location of the folder:

    mklink /d "C:\Original\Folder\Location" "C:\Users\NAME\SkyDrive\FOLDERNAME\"

    Enter the correct paths for the exact location of the original folder and the current location of the folder in your SkyDrive. Windows will then create a symbolic link at the folder's original location. Most programs should be fooled by this symbolic link, saving their files directly to SkyDrive. You can test this yourself: put a file into the folder at its original location. It will be saved to SkyDrive and sync normally, appearing in your SkyDrive storage online.

    One downside here is that you won't be able to save a file onto SkyDrive without it taking up space on the same hard drive SkyDrive is on, so you won't be able to scatter folders across multiple hard drives and sync them all. However, you could always change the location of the SkyDrive folder on Windows 8.1 and put it on a drive with a larger amount of free space. To do this, right-click the SkyDrive folder in File Explorer, select Properties, and use the options on the Location tab. You could even use Storage Spaces to combine the drives into one larger drive.

    Automatically Copy the Original Files to SkyDrive

    Another option would be to run a program that automatically copies files from another folder on your computer to your SkyDrive folder. For example, let's say you want to sync copies of important log files that a program creates in a specific folder. You could use a program that allows you to schedule automatic folder-mirroring, configuring it to regularly copy the contents of your log folder to your SkyDrive folder.

    This may be a useful alternative for some use cases, although it isn't the same as standard syncing. You'll end up with two copies of the files taking up space on your system, which won't be ideal for large files. The files also won't be instantly uploaded to your SkyDrive storage after they're created, but only after the scheduled task runs. There are many options for this, including Microsoft's own SyncToy, which continues to work on Windows 8. If you were using the symbolic link trick to automatically sync copies of PC game save files with SkyDrive, you could just install GameSave Manager. It can be configured to automatically create backup copies of your computer's PC game save files on a schedule, saving them to SkyDrive where they'll be synced and backed up online.

    SkyDrive support was completely rewritten for Windows 8.1, so it's not surprising that this trick no longer works. The ability to use symbolic links in previous versions of SkyDrive was never officially supported, so it's not surprising to see it break after a rewrite. None of the methods above are as convenient and quick as the old symbolic link method, but they're the best we can do with the SkyDrive integration Microsoft has given us in Windows 8.1. It's still possible to use symbolic links to easily sync other folders with competing cloud storage services like Dropbox and Google Drive, so you may want to consider switching away from SkyDrive if this feature is critical to you.
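    As a concrete sketch of the move-then-link sequence from an elevated Command Prompt -- both paths below are invented for illustration:

        rem Move the folder into the SkyDrive folder so it syncs normally.
        rem (/E includes subfolders; /MOVE deletes the source after copying.)
        robocopy "C:\Games\Saves" "C:\Users\NAME\SkyDrive\Saves" /E /MOVE

        rem Leave a directory symbolic link at the old path so the program
        rem still finds its folder and unknowingly writes into SkyDrive.
        mklink /d "C:\Games\Saves" "C:\Users\NAME\SkyDrive\Saves"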

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
    Guest posting from Rebekah Jackson:

    This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference. I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help.

    It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers' CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy... and the stakes are high.

    Michael Horn, co-author of "Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns", set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on 'Why Successful Organizations Fail'. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with the slide rule, then developed the computer, which centralized in the form of mainframes and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon.

    Applying this pattern to higher education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then by community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014... and by some measures, that's already the case today. The implication is that technology moves from being the backbone of the campus, the IT department's domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy. This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution's strategic plan.

    Subsequent sessions explored this theme. Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution's financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed. Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first. Technology then is the enabler.

    The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and shared his thoughts candidly about his experiences with cloud deployments, as well as key issues around managing costs and security in cloud deployments.

    Okay, we've covered a lot of ground at this point, from financial planning, business strategy, and cloud computing, with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? It seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40s or 50s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US-based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4-year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article
