Search Results

Search found 1395 results on 56 pages for 'repo'.

Page 14 of 56

  • Difference between VMWare tools?

    - by tore-
    I'm currently writing a module for Puppet which installs VMware Tools on virtual nodes. I want to do this via yum and a yum repo. VMware has its own repo (http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64/index.html), which I thought I could use rather than creating my own. But then I noticed that the packages in their repo are a lot different from the RPM used when installing VMware Tools on the node via "Install/Upgrade VMware Tools" in vSphere. Does anyone know what the real difference is? Does anyone have any preferences?
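
    For reference, a repo definition pointing yum at the URL above could look like the sketch below; the file name and the gpgcheck setting are assumptions on my part, not taken from VMware's documentation:

        # /etc/yum.repos.d/vmware-tools.repo (hypothetical file name)
        [vmware-tools]
        name=VMware Tools for ESX 3.5 (RHEL 5, x86_64)
        baseurl=http://packages.vmware.com/tools/esx/3.5latest/rhel5/x86_64/
        enabled=1
        # assumption: flip to 1 and add a gpgkey= line once VMware's signing key is imported
        gpgcheck=0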

    Read the article

  • Git for Application Settings

    - by devians
    I use a lot of tools at work and at home, and I'm constantly tweaking them in one location or the other. It's somewhat common practice for people to use Git to version their .vim, .vimrc, and other dotfiles, since you can host your config files on GitHub and get the shareability and all the other advantages that implies. Being able to version and branch my configs sounds like a grand idea, since I'm always messing about with them. I'd like to discuss the best practice for doing this on a slightly wider scope. How would you implement it? Have your config-files repo in ~/Library/Configs or similar, and symlink the appropriate files? How do you handle preference files for applications, e.g. iTerm2? Those files are recreated every time, so you'd have to symlink 'backwards' and put a link in the repo rather than symlinking to the repo, since the app would just delete the symlink.
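
    As a minimal sketch of the symlink approach (the repo URL, paths, and file names are illustrative, not prescriptive):

        # clone the config repo to a central location
        git clone git@github.com:you/configs.git ~/Library/Configs
        # symlink the tracked dotfiles into place
        ln -s ~/Library/Configs/vimrc ~/.vimrc
        ln -s ~/Library/Configs/vim   ~/.vim
        # for apps that recreate their prefs (like iTerm2), note that git stores a
        # symlink as a link, not as content, so linking "backwards" into the repo
        # won't version the data; a periodic copy is one crude workaround:
        cp ~/Library/Preferences/com.googlecode.iterm2.plist ~/Library/Configs/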

    Read the article

  • Gitolite and GitLab - how can the `www-data` user check out?

    - by mblaettermann
    I have just installed Gitolite and GitLab and I am very happy with them. Everything works fine so far: I can create repos, push to them, and clone them on other clients on the network. Great! But now I want to add some post-receive hooks: when I push to some repo, that repo should be checked out on the server in the /var/www/repos directory. I did this with GitLab's deploy hooks and this endpoint script. The problem is that the scripts run under the user "www-data", which has no access to GitLab/gitolite. How do I change this? I need to be able to check out repos as the www-data user, using the git@server/repo.git syntax.
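
    One way this is commonly solved -- sketched here with assumed paths, since the question doesn't show the server layout -- is to give www-data its own SSH key and register it with gitolite, so the deploy script can clone over SSH like any other user:

        # give www-data a key pair (on Debian/Ubuntu its home is usually /var/www)
        sudo -u www-data ssh-keygen -t rsa -f /var/www/.ssh/id_rsa -N ""
        # inside a clone of the gitolite-admin repo: register the key
        cp /var/www/.ssh/id_rsa.pub keydir/www-data.pub
        # (then grant www-data read access to the repo in conf/gitolite.conf and push)
        # the hook/endpoint script can now run:
        sudo -u www-data git clone git@server:repo.git /var/www/repos/repo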

    Read the article

  • Updating a script currently being run by Task Scheduler on Windows

    - by orangechicken
    I have a scheduled task that runs a script on a, ahem, schedule, and the script updates a local git repo. The script itself is a file in this local git repo. Currently, what I'm seeing is that the script is run, git complains that permission is denied when writing to the file, and that actually results in the script being deleted! The next time the scheduled task runs, the script file is missing. How can I ensure that when I pull changes to this script from the repo, the file is actually updated?
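
    One pattern that sidesteps this -- a sketch with made-up paths, since the question doesn't name the script -- is to have the scheduled task run a wrapper that copies the script out of the repo and executes the copy, so git can overwrite the original mid-run:

        @echo off
        rem Hypothetical wrapper: run a copy of the repo's script so that the
        rem "git pull" inside it can replace the original file without conflict.
        copy /Y "C:\repo\update.bat" "%TEMP%\update.bat"
        call "%TEMP%\update.bat"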

    Read the article

  • dav_svn write access

    - by canavar
    Good day! I am configuring dav_svn and Apache with LDAP auth. What I want to do:

    - allow anonymous READ access to the repo
    - allow write access to authenticated users

    Here is my config:

        # Uncomment this to enable the repository
        DAV svn
        SVNPath /home/svn/ldap-test-repo
        AuthType Basic
        AuthName "LDAP-REPO Repository"
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        AuthLDAPBindDN "cn=svn,ou=applications,dc=company,dc=net"
        AuthLDAPBindPassword "pass"
        AuthLDAPURL ldap://ldap.company.net:389/ou=Users,dc=company,dc=net?uid?sub?(objectClass=person)
        <Limit GET PROPFIND OPTIONS REPORT>
            Allow from all
        </Limit>
        <LimitExcept GET PROPFIND OPTIONS REPORT>
            Require ldap-group cn=group,ou=services,dc=company,dc=net
        </LimitExcept>

    But when I test it, this config doesn't work: I can check out without auth and commit without auth. What am I doing wrong? Thanks!

    Read the article

  • EF4 POCO WCF Serialization problems (no lazy loading, proxy/no proxy, circular references, etc)

    - by kdawg
    OK, I want to make sure I cover my situation and everything I've tried thoroughly. I'm pretty sure what I need/want can be done, but I haven't quite found the perfect combination for success.

    I'm utilizing Entity Framework 4 RTM and its POCO support. I'm looking to query for an entity (Config) that contains a many-to-many relationship with another entity (App). I turn off lazy loading, disable proxy creation for the context, and explicitly load the navigation property (either through .Include() or .LoadProperty()). However, when the navigation property is loaded (that is, Apps is loaded for a given Config), the App objects that were loaded already contain references to the Configs that have been brought into memory. This creates a circular reference.

    Now I know the DataContractSerializer that WCF uses can handle circular references by setting the preserveObjectReferences parameter to true. I've tried this with a couple of different attribute implementations I've found online. It is needed to prevent the "the object graph contains circular references and cannot be serialized" error. However, it doesn't prevent the serialization of the entire graph, back and forth between Config and App. If I invoke it via WcfTestClient.exe, I get a stackoverflow (ha!) exception from the client and I'm hosed. I get different results from different invocation environments (a C# unit test with a local reference to the web service appears to work OK, though I can still drill back and forth between Configs and Apps endlessly, but calling it from a ColdFusion environment only returns the first Config in the list and errors out on the others). My main goal is to have a serialized representation of the graph I explicitly load from EF (i.e. a list of Configs, each with their Apps, but no App-back-to-Config navigation).

    NOTE: I've also tried using the ProxyDataContractResolver technique and keeping proxy creation enabled on my context. This blows up complaining about unknown types encountered. I read that ProxyDataContractResolver didn't fully work in Beta 2, but should work in RTM.

    For some reference, here is roughly how I'm querying the data in the service:

        var repo = BootStrapper.AppCtx["AppMeta.ConfigRepository"] as IRepository<Config>;
        repo.DisableLazyLoading();
        repo.DisableProxyCreation();
        //var temp2 = repo.Include(cfg => cfg.Apps).Where(cfg => cfg.Environment.Equals(environment)).ToArray();
        var temp2 = repo.FindAll(cfg => cfg.Environment.Equals(environment)).ToArray();
        foreach (var cfg in temp2)
        {
            repo.LoadProperty(cfg, c => c.Apps);
        }
        return temp2;

    I think the crux of my problem is that when loading up navigation properties for POCO objects, Entity Framework 4 prepopulates navigation properties for objects already in memory. This in turn hoses up the WCF serialization, despite every effort made to properly handle circular references. I know it's a lot of information, but it's really standing in the way of going forward with EF4/POCO in our system. I've found several articles and blogs touching upon these subjects, but for the life of me, I cannot resolve this issue. Feel free to simply ask questions and help me brainstorm this situation.

    PS: For the sake of being thorough, I am injecting the WCF services using the HEAD build of Spring.NET for the fix to Spring.ServiceModel.Activation.ServiceHostFactory. However, I don't think this is the source of the problem.
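
    For context, the attribute implementations mentioned above generally boil down to swapping in a serializer with preserveObjectReferences switched on; a minimal sketch of that operation behavior (not the asker's exact code) looks like this:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel.Description;
        using System.Xml;

        // Emits z:Id/z:Ref links instead of re-serializing each Config/App
        // every time it is reached through a navigation property.
        public class PreserveReferencesOperationBehavior : DataContractSerializerOperationBehavior
        {
            public PreserveReferencesOperationBehavior(OperationDescription operation)
                : base(operation) { }

            public override XmlObjectSerializer CreateSerializer(Type type, XmlDictionaryString name,
                XmlDictionaryString ns, IList<Type> knownTypes)
            {
                return new DataContractSerializer(type, name, ns, knownTypes,
                    int.MaxValue, // maxItemsInObjectGraph
                    false,        // ignoreExtensionDataObject
                    true,         // preserveObjectReferences
                    null);        // dataContractSurrogate
            }
        }

    Note that this only de-duplicates objects on the wire; it does not trim the App-to-Config back-references themselves, which matches the endless drill-down described above.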

    Read the article

  • HgWebDir push permission denied error

    - by Gregg
    I have a new Fedora 12 server that I am attempting to set up Mercurial on. I have yum-installed Mercurial, and most things seem to work fine. However, after setting up hgwebdir.cgi through Apache, I am unable to do an hg push to the only repo currently being hosted. The error I get back is:

        searching for changes
        abort: HTTP Error 500: Permission denied: .hg/store/lock

    httpd is running as user apache:

        UID        PID  PPID  C STIME TTY          TIME CMD
        root      1691     1  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1694  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1695  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1696  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1697  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1698  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1699  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1700  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1701  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd

    and I set permissions so that the apache user owns the whole repo. In a last-ditch attempt, I even made the repo globally writable:

        [root@builds .hg]# ll
        total 424K
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-19 14:43 .
        drwxrwxrwx. 19 apache apache 4.0K 2010-04-15 13:33 ..
        -rw-rw-rw-.  2 apache apache   57 2010-04-13 11:42 00changelog.i
        -rw-rw-rw-.  1 apache apache   93 2010-04-16 15:33 branchheads.cache
        -rw-rw-rw-.  1 apache apache 192K 2010-04-15 13:33 dirstate
        -rw-r--r--.  1 apache apache  156 2010-04-19 14:43 hgrc
        -rw-rw-rw-.  1 apache apache   42 2010-04-15 13:33 last-message.txt
        -rw-rw-rw-.  2 apache apache   23 2010-04-13 11:42 requires
        drwxrwxrwx.  4 apache apache 4.0K 2010-04-19 11:26 store
        -rw-rw-rw-.  1 apache apache   45 2010-04-14 14:08 tags.cache
        -rw-rw-rw-.  1 apache apache    7 2010-04-16 15:33 undo.branch
        -rw-rw-rw-.  1 apache apache 192K 2010-04-16 15:33 undo.dirstate
        [root@builds .hg]# cd store
        [root@builds store]# ll
        total 308K
        drwxrwxrwx.  4 apache apache 4.0K 2010-04-19 11:26 .
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-19 14:43 ..
        -rw-rw-rw-.  1 apache apache  20K 2010-04-16 15:33 00changelog.i
        -rw-rw-rw-.  1 apache apache  81K 2010-04-16 15:33 00manifest.i
        drwxrwxrwx. 17 apache apache 4.0K 2010-04-13 11:47 data
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-13 11:43 dh
        -rw-rw-rw-.  2 apache apache 177K 2010-04-15 11:03 fncache
        -rw-rw-rw-.  1 apache apache   67 2010-04-16 15:33 undo

    I have a clone of the repo elsewhere on the machine running as a different user. If I set the default value in the [paths] section of the clone's hgrc file to the local file path on the server, the push works fine, but if I switch it to use the URL, I get the error every time. Some possible quirks in how I've set this up: hgwebdir.cgi is sitting in /var/www/cgi-bin and the repo is a child of /opt/hg. I turned off suexec as well, and this doesn't seem to clear up the issue. The only line I added in the Apache config to get hgwebdir running is:

        ScriptAlias /hg "/var/www/cgi-bin/hgwebdir.cgi"

    The hgweb.config is also in /var/www/cgi-bin, and its contents are:

        [collections]
        /opt/hg = /opt/hg

        [trusted]
        users = *

        [web]
        baseurl = /hg
        push_ssl = false
        allow_push = *

    The repo browser is working fine; it's just push that doesn't work. Apache's error_log doesn't have anything about this error at all.
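
    Since the permission bits above look wide open, one thing worth ruling out on Fedora -- a guess from the symptom, not something established in the question -- is SELinux denying httpd writes outside its allowed contexts:

        # check whether SELinux is enforcing
        getenforce
        # temporarily go permissive and retry the push to see if SELinux is the blocker
        setenforce 0
        # if it was, relabel the repo with a writable httpd context instead
        # (the exact type name may differ on older policies)
        chcon -R -t httpd_sys_rw_content_t /opt/hg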

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website, and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (Symfony, MySQL, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely or through NetBeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

    - public_html: live site and git repo
    - testing: a mirror of the site used for visual tests (full git repo)
    - dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    - all work is done in ./dev/ticket#; visit www.domain.com/dev/ticket# to visually test
    - make granular commits as necessary until dev is done
    - git push origin master:ticket#
    - if the push fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again
    - mark ticket# as ready for integration

    Integration and deployment process:

        cd ../../testing
        git merge ticket# --no-ff -m "integration test for ticket#"   # check for conflicts

    - run hudson tests
    - visit www.domain.com/testing for a visual test
    - if all tests pass:
        - if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; else git push origin
        - cd ../public_html and git checkout -f (the live site should now have the latest dev from ticket#)
    - else:
        - revert the merge: git checkout master~1; git commit -m "reverting ticket#"
        - update ticket# that testing failed, with the failure details

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?
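
    On the last question, a short sketch of the two usual options (the tag name and commit hash are illustrative):

        # option 1: discard everything after a snapshot (history rewrite; only
        # safe if nothing after the tag has been shared with other clones)
        git reset --hard sprint-5
        # option 2: keep history and add a new commit that undoes a bad merge
        git revert -m 1 <merge-commit-sha>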

    Read the article

  • RUEI 12.1.0.3.0 dependency requirement for php-soap-5.1.6

    - by sthieme
    Dear readers, please be aware of the new php-soap-5.1.6 dependency in RUEI 12.1.0.3. For a swift upgrade to RUEI 12.1.0.3 you should be aware of this prerequisite, as it can be a time-eater to obtain individual RPM packages inside a datacenter for an old OS revision once you have started the upgrade process. You may use the following procedure to retrieve the required package via http://public-yum.oracle.com: customers will have to check /etc/issue and /etc/issue.net (or /etc/redhat-release for RHEL-based OSes) for their current release in order to obtain the fitting package version. Customers of OEL can download the packages from our public-yum.oracle.com server under http://public-yum.oracle.com/repo/, e.g. http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm. Earlier releases (up to 5.5) are located under the EnterpriseLinux path instead of the OracleLinux path, e.g. http://public-yum.oracle.com/repo/EnterpriseLinux/EL5/5/base/x86_64/php-soap-5.1.6-27.el5.x86_64.rpm. Note: you will have to obtain the relevant RedHat RPM packages via the login-protected RHN URLs. Oracle can only provide support for Oracle Enterprise Linux, and RHEL packages are not available publicly via rpm-seek.com to my knowledge. Kind regards, Stefan
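
    As a concrete sketch using the OL5 URL quoted above (adjust the path to match your own release as described):

        # confirm the OS release first, then fetch and install the package
        cat /etc/issue
        wget http://public-yum.oracle.com/repo/OracleLinux/OL5/8/base/x86_64/php-soap-5.1.6-32.el5.x86_64.rpm
        rpm -ivh php-soap-5.1.6-32.el5.x86_64.rpm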

    Read the article

  • How to make the members of my Data Access Layer object aware of their siblings

    - by Graham
    My team currently has a project with a data access object composed like so:

        public abstract class DataProvider
        {
            public CustomerRepository CustomerRepo { get; protected set; }  // was "private set"; the derived class below needs to assign these
            public InvoiceRepository InvoiceRepo { get; protected set; }
            public InventoryRepository InventoryRepo { get; protected set; }
            // a couple more like the above
        }

    We have non-abstract classes that inherit from DataProvider, and the type of CustomerRepo that gets instantiated is controlled by that child class:

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo(); // derived from base CustomerRepository
                InvoiceRepo = new InvoiceRepository();
                InventoryRepo = new InventoryRepository();
            }
        }

    Our problem is that some of the methods inside a given repo would really benefit from having access to the other repos. For example, a method inside InventoryRepository needs to get to Customer data to make some determinations, so I need to pass in a reference to a CustomerRepository object. What's the best way for these "sibling" repos to be aware of each other and have the ability to call each other's methods as needed? Virtually all the other repos would benefit from having the CustomerRepo, for example, because it is where names/phones/etc. are selected from, and these data elements need to be added to the various objects that are returned out of the other repos. I can't just new-up a plain CustomerRepository object inside a method within a different repo, because it might not be the base CustomerRepository that actually needs to run.
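
    One common answer -- sketched below with wiring I've assumed, not taken from the question -- is to have the provider construct the shared repo first and hand it to the siblings through their constructors, so whichever CustomerRepository subtype was chosen is the one the siblings call:

        public class InventoryRepository
        {
            private readonly CustomerRepository customerRepo;

            // the provider passes in whichever CustomerRepository subtype it built
            public InventoryRepository(CustomerRepository customerRepo)
            {
                this.customerRepo = customerRepo;
            }

            public void AttachCustomerData(/* inventory items */)
            {
                // use customerRepo here for the names/phones lookups described above
            }
        }

        public class FloridaDataProvider : DataProvider
        {
            public FloridaDataProvider()
            {
                CustomerRepo = new FloridaCustomerRepo();
                InvoiceRepo = new InvoiceRepository(CustomerRepo);
                InventoryRepo = new InventoryRepository(CustomerRepo);
            }
        }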

    Read the article

  • Importing an existing project into Git

    - by Andy
    Background: During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us, so we decided to migrate to Git. The transition has been less than smooth, though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA, the file was moved manually, and the same goes for stuff that was ready for PROD. So our current QA and PROD environments do not correspond to any particular commit in the master branch.

    Clarification: The QA and PROD branches are not currently, nor have they ever been, in source control.

    The question: How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way, not only will the branches reflect the differences in the environments, they'll be in the right order chronologically, with the newest commits in the DEV branch.

    What I've tried so far: I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV, and that will remove them when we merge up; I suspect there would be similar problems if I started QA at an earlier (though not exact) commit from DEV.
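
    The "start over" option described above would look roughly like this (directory and branch names are illustrative):

        cd prod-snapshot                  # a copy of the current PROD files
        git init
        git add . && git commit -m "PROD baseline"
        git checkout -b qa                # layer QA's differences on top
        # copy the QA tree over the working directory, then:
        git add -A && git commit -m "QA baseline"
        git checkout -b dev               # and DEV on top of that
        # copy the DEV tree over the working directory, then:
        git add -A && git commit -m "DEV baseline"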

    Read the article

  • Can I have a workspace that is both a git workspace and an svn workspace?

    - by Troy
    I have now checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build-server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (this prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy). Of course, sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over the "build" workspace. Besides taking a lot of extra time copying files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy / git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. Then, if I need to make changes to the build, I push those back into the git master (which is also an svn working copy), then check them into the main svn repo:

        |---------------|     |----------------------|     |---------------------|
        | main svn repo | <-- | svn working copy     | <-- | non-svn-versioned   |
        |---------------|     | (svn dev workspace / |     | build workspace     |
                              |  git master)         |     | (git working copy)  |
                              |----------------------|     |---------------------|

    Just switching everything to git would obviously be better, but: big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now. BTW, I know there is a maven plugin for Eclipse and everything; I'm mainly interested to know whether there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?
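
    A minimal version of the proposed setup -- assuming plain git layered over the svn checkout rather than git-svn, with illustrative paths -- could look like:

        cd ~/dev-workspace                    # the existing svn working copy
        git init
        printf '.svn/\nbin/\n' >> .gitignore  # keep svn metadata and Eclipse output out of git
        git add . && git commit -m "baseline from svn"

        cd ~/build-workspace                  # the throwaway maven build area
        git clone ~/dev-workspace .
        # ...run the maven build, fix things, commit locally...

        # back on the dev side, pull the build fixes in (this avoids pushing
        # into a non-bare repo's checked-out branch):
        cd ~/dev-workspace && git pull ~/build-workspace master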

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy/rsync the files across the network to a central point. The last script is to get the config files put into / updated in the repository. The script is as follows:

        #!/bin/bash
        clear
        SERVERNAME="betty"
        SCRIPTDIR="/home/jon"
        GITROOT="/tmp/git"
        TEMPROOT="/tmp/backups"
        BACKUPROOTDIR="/mnt/backups"

        echo " - running as user: $UID"
        echo "backingup git config on $SERVERNAME"
        echo ""

        # check to see if root backup folder exists, otherwise create it.
        if [ -d $GITROOT ]; then
            rm -rf $GITROOT
        fi
        mkdir $GITROOT
        cd $GITROOT

        echo " - testing if home is where I think it should be!"
        echo $HOME
        echo " - testing if it can see netrc"
        tail $HOME/.netrc

        git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
        cd HOH-config-backups

        echo " - copy Configuration Folders across"
        cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
        cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/

        git add .
        git commit -a -m "committing any new configuration changes!"
        git push origin master

        echo ""
        echo "Git repo updated"
        echo ""

        echo " - backing up this script"
        FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
        if [ ! -d $FIREWIGSCRIPTLOC ]; then
            mkdir $FIREWIGSCRIPTLOC
        fi
        cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC

    The git repo is on a different machine on the network, using Apache and http-backend (the smart HTTP protocol). If I run this script as me, "jon", it works. If I run it from crontab, it fails. git uses the /home/jon/.netrc file for authentication:

        machine 192.168.10.97
        login gitconfig
        password 1234579

    The log from crontab is:

        TERM environment variable not set.
         - running as user: 1000
        backingup git config on betty
         - testing if home is where I think it should be!
        /home/jon
         - testing if it can see netrc
        machine 192.168.10.97
        login gitconfig
        password 1234579
        got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd
        got be880f2d306778a538d592e7a02eb19f416612f7
        got bd387e8def9f77aafa798bf53e80d949aba443e8
        got 1bc1a59e12775841d4c59d77c63b8a73823138c2
        walk bd387e8def9f77aafa798bf53e80d949aba443e8
        Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        got 030512237bca72faf211e0e8ec2906164eac34f6
        got 9bc2f575240bc1f61ff7d69777ce1a165d06b184
        got b8400f7f01429104a9d4786a6bb1a16d293e37c1
        got 2403b5bf611010e0b401f776f0e23b09ce744838
        got 1a27944c48269ef3608a8f2466e43402d06faac0
        got b686f45b7d57af4fa8ca0d528bb85216d6247e19
        Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git
        Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff
        Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d
        walk b686f45b7d57af4fa8ca0d528bb85216d6247e19
        got 364e30daec17814073e668f490bb84af891fe1f7
        got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce
        got 9e77c47574b5e23ea669afe0c23ab235e4917ee1
        got 6654e0d328a216b3783e98c47206cb2d01b3353d
        got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b
        got 8c384a24f645389e4d4b08013c79e9e73a658342
        got d203be0123736ee025ce20c081f1489098648dfc
        got 1852603bf7709e71417d8ccec02390279d533642
        got fb753a26b20b04694419fce8ecdaa8dbec105cf1
        got 736028997cd84dd1c135f57e9d246674b9cd0b9d
        got 7af836249e20096d0476a548d5be702a071cdd4b
        got 240dc39d9db50df63073fc7927b2d002dfa0f54c
        got 93abd36e3935a01011eb753b635a1a0e984bf31e
        got c6269e28fecf4d8d0d98b9358aecb3acff02df44
        got b0aa29432f73e64032682a351d436c24b14078ab
        walk 240dc39d9db50df63073fc7927b2d002dfa0f54c
        got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5
        got 0da2def4de0565483cdbe6b87418ee2beb122e58
        got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c
        got 437a93d27b5bb89c739a0564a34a616e832c3ebe
        got fe0385abe5c0acd8462268dac330bae00e934f1b
        got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9
        got d29f624bf1a5eceedaa86c10fee35f62747c7d04
        got 0154e4c987132585ea7a92b77d02dba285512d6b
        got eda8bf526567c25ee70addb2ad3c3c6aa57eac77
        got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e
        got 8e20881e19667aa22245d0598646991067455a4d
        got abb1123145689b35eb19519952c71253ee45fa98
        got dfeff593c79b4156ce2ce1adf043d0e80356488c
        got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc
        got b13eb81cc274780322ecf786372320343926bec9
        walk 8de83868b3fac748b0a55eba16c8f668ec852abb
        got b5961421bbc42afe7a07cc1c8b615aba26ba74d7
        got 2650ba819019df4193b482733e29ca79b29f3f2c
        got b3111e1be8103e91803a97a817ed81f28025aca1
        got b060be934d709684f5eb5dad3c03932a3589e864
        got cf70d2043f081d7a4438e9d5a290a9f986c84060
        got 80bf0f1cc836feab86d6935bb7968d8555a8d531
        got da318d167920e34bc6573e4fc236249ccbbee316
        got d82ac853d387b760149599e6e1ab96403f6ec672
        got 0005f691d1f46550fdb4e56025f52e30a5b18cc2
        Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/
         - copy Configuration Folders across
        Created commit 424df2f: committing any new configuration changes!
         3 files changed, 55 insertions(+), 1 deletions(-)
         create mode 100755 scripts/betty/gitConfig.sh
        error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22
        error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git'

        Git repo updated

         - backing up this script
        cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied

    my crontab is:

        # m h dom mon dow command
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    I open it by doing:

        $ crontab -e

    i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab.

    edit: found out about the user id:

        jon@betty:~$ id
        uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)

    Here is my $HOME/.gitconfig file:

        [user]
            name = Jon Hawkins
            email = [email protected]

    Thanks
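
    For what it's worth, cron runs jobs with a minimal environment; one hedged guess is to pin HOME and PATH at the top of the crontab so git and the script agree on where .netrc lives, and to reproduce cron's stripped environment interactively when debugging:

        # in the crontab:
        HOME=/home/jon
        PATH=/usr/local/bin:/usr/bin:/bin
        04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

        # to debug from a shell with a cron-like (stripped) environment:
        env -i HOME=/home/jon /bin/bash /home/jon/gitConfig.sh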

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates straight with these services via SOAP, and returns DataTables. Now I am designing a new modern web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it.

    Now to the problem: I'm trying to make it easy to replace the WCF services in the near future with some other data storage, e.g. another endpoint, a database, or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer communicates directly with the data storage (WCF service, database, or whatever), and the service layer uses the repository (via dependency injection) to get the data; it doesn't care where the data comes from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO), and be able to test the logic in the service layer with a mock repository (using dependency injection).

    Below is some code to explain where I'm going with this. But my question is: does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Does this design make it too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on (e.g. an ORM) and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services all share repositories. Now it's easy to mock a repository, add it to the factory, and create a provider using that factory. Any advice is much appreciated; I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

        public class RepositoryFactory
        {
            private Dictionary<Type, IServiceRepository> repositories;

            public RepositoryFactory()
            {
                this.repositories = new Dictionary<Type, IServiceRepository>();
            }

            public void AddRepository<T>(IServiceRepository repo) where T : class
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    this.repositories.Remove(typeof(T));
                }
                this.repositories.Add(typeof(T), repo);
            }

            public dynamic GetRepository<T>()
            {
                if (this.repositories.ContainsKey(typeof(T)))
                {
                    return this.repositories[typeof(T)];
                }
                throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
            }
        }

    I'm not very fond of dynamic, but I don't know how to retrieve that repository otherwise.

    2. Repository and service

        // Service repository interface
        // All repository interfaces extend this
        public interface IServiceRepository { }

        // Invoice repository interface
        // Makes it easy to mock the repository later on
        public interface IInvoiceServiceRepository : IServiceRepository
        {
            List<Invoice> GetInvoices();
        }

        // Invoice repository
        // Connects to some data storage to retrieve invoices
        public class InvoiceServiceRepository : IInvoiceServiceRepository
        {
            public List<Invoice> GetInvoices()
            {
                // Get the invoices from somewhere
                // This could be a WCF, a database, or whatever
                using (InvoiceServiceClient proxy = new InvoiceServiceClient())
                {
                    return proxy.GetInvoices();
                }
            }
        }

        // Invoice service
        // Service that handles talking to a real or a mock repository
        public class InvoiceService
        {
            // Repository factory
            RepositoryFactory repoFactory;

            // Default constructor
            // Default connects to the real repository
            public InvoiceService(RepositoryFactory repo)
            {
                repoFactory = repo;
            }

            // Service function that gets all invoices from some repository (mock or real)
            public List<Invoice> GetInvoices()
            {
                // Query the repository
                return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
            }
        }
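
    On the dynamic concern above, one possible tweak -- my suggestion, not code from the question -- is to constrain the generic method and cast inside it, so callers get a typed repository back:

        public T GetRepository<T>() where T : class, IServiceRepository
        {
            IServiceRepository repo;
            if (this.repositories.TryGetValue(typeof(T), out repo))
            {
                return (T)repo;
            }
            throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
        }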

    Read the article

  • A class meant for an Alfresco behavior and its bean: how do they work and how are they deployed through Eclipse?

    - by MrHappy
    (This is a partial repost of a question asked 10 days ago, because only one part was answered (not included); I've rewritten it into a much better question and added 3 more tags.)

    Where do I put DeleteAsset.class, or why isn't it being found? I've put the compiled class from the bin of the Eclipse workspace into alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes/com/openerp/behavior/, and right now it's giving me:

        Error loading class [com.openerp.behavior.DeleteAsset] for bean with name 'deletionBehavior'
        defined in URL [file:/home/openerp/alfresco-4.2.c/tomcat/shared/classes/alfresco/extension/custom-web-context.xml]:
        problem with class file or dependent class; nested exception is
        java.lang.NoClassDefFoundError: com/openerp/behavior/DeleteAsset (wrong name: DeleteAsset)

    when I put it in there. (See the bean below!)

    The code (I'm trying to work without the model class; I don't know if I made any silly mistakes in that):

        package com.openerp.behavior;

        import java.util.List;
        import java.net.*;
        import java.io.*;

        import org.alfresco.repo.node.NodeServicePolicies;
        import org.alfresco.repo.policy.Behaviour;
        import org.alfresco.repo.policy.JavaBehaviour;
        import org.alfresco.repo.policy.PolicyComponent;
        import org.alfresco.repo.policy.Behaviour.NotificationFrequency;
        import org.alfresco.repo.security.authentication.AuthenticationUtil;
        import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
        import org.alfresco.service.cmr.repository.ChildAssociationRef;
        import org.alfresco.service.cmr.repository.NodeRef;
        import org.alfresco.service.cmr.repository.NodeService;
        import org.alfresco.service.namespace.NamespaceService;
        import org.alfresco.service.namespace.QName;
        import org.alfresco.service.transaction.TransactionService;
        import org.apache.log4j.Logger;

        //this is the newer version
        //import com.openerp.model.openerpJavaModel;

        public class DeleteAsset implements NodeServicePolicies.BeforeDeleteNodePolicy {

            private PolicyComponent policyComponent;
            private Behaviour beforeDeleteNode;
            private NodeService nodeService;

            public void init() {
                this.beforeDeleteNode = new JavaBehaviour(this, "beforeDeleteNode", NotificationFrequency.EVERY_EVENT);
                this.policyComponent.bindClassBehaviour(
                        QName.createQName("http://www.someco.com/model/content/1.0", "beforeDeleteNode"),
                        QName.createQName("http://www.someco.com/model/content/1.0", "sc:doc"),
                        this.beforeDeleteNode);
            }

            // "void" was missing here in the original paste, a compile error
            public void setNodeService(NodeService nodeService) {
                this.nodeService = nodeService;
            }

            // a setter like this is also needed for the PolicyComponent property injected below
            public void setPolicyComponent(PolicyComponent policyComponent) {
                this.policyComponent = policyComponent;
            }

            @Override
            public void beforeDeleteNode(NodeRef node) {
                System.out.println("beforeDeleteNode!");
                try {
                    // this could/should be defined in your OpenERPModel class
                    QName attachmentID1 = QName.createQName("http://www.someco.com/model/content/1.0", "OpenERPattachmentID1");
                    int attachmentid = (Integer) nodeService.getProperty(node, attachmentID1);
                    //int attachmentid = 123;
                    URL oracle = new URL("http://0.0.0.0:1885/delete/%20?attachmentid=" + attachmentid);
                    URLConnection yc = oracle.openConnection();
                    BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
                    String inputLine;
                    while ((inputLine = in.readLine()) != null) {
                        // drain the response; braces added so the commented-out
                        // println doesn't make in.close() the loop body
                        //System.out.println(inputLine);
                    }
                    in.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    This is my full custom-web-context file:

        <?xml version='1.0' encoding='UTF-8'?>
        <!DOCTYPE beans PUBLIC '-//SPRING//DTD BEAN//EN' 'http://www.springframework.org/dtd/spring-beans.dtd'>
        <beans>

            <!-- Registration of new models -->
            <bean id="smartsolution.dictionaryBootstrap" parent="dictionaryModelBootstrap" depends-on="dictionaryBootstrap">
                <property name="models">
                    <list>
                        <value>alfresco/extension/scOpenERPModel.xml</value>
                    </list>
                </property>
            </bean>

            <!-- deletion of attachments within OpenERP when delete is initiated in Alfresco -->
            <bean id="DeleteAsset" class="com.openerp.behavior.DeleteAsset" init-method="init">
                <property name="NodeService">
                    <ref bean="NodeService" />
                </property>
                <property name="PolicyComponent">
                    <ref bean="PolicyComponent" />
                </property>
            </bean>

        </beans>

    and the content type:

        <type name="sc:doc">
            <title>OpenERP Document</title>
            <parent>cm:content</parent>
        </type>

    There's also this when I open Share:

        An error has occured in the Share component: /share/service/components/dashlets/my-sites.
        It responded with a status of 500 - Internal Error.
        Error Code Information: 500 - An error inside the HTTP server which prevented it from fulfilling the request.
        Error Message: 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js':
        09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco:
        09230000 Unable to retrieve IMAP server status from Alfresco: 404
        Server: Alfresco Spring WebScripts - v1.2.0 (Release 1207) schema 1,000
        Time: Oct 23, 2013 11:40:06 AM
        Click here to view full technical information on the error.

        Exception: org.alfresco.error.AlfrescoRuntimeException - 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404
        org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:108)
        org.alfresco.web.scripts.SingletonValueProcessorExtension.getSingletonValue(SingletonValueProcessorExtension.java:59)
        org.alfresco.web.scripts.ImapServerStatus.getEnabled(ImapServerStatus.java:49)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        java.lang.reflect.Method.invoke(Method.java:606)
        org.mozilla.javascript.MemberBox.invoke(MemberBox.java:155)
        org.mozilla.javascript.JavaMembers.get(JavaMembers.java:117)
        org.mozilla.javascript.NativeJavaObject.get(NativeJavaObject.java:113)
        org.mozilla.javascript.ScriptableObject.getProperty(ScriptableObject.java:1544)
        org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1375)
        org.mozilla.javascript.ScriptRuntime.getObjectProp(ScriptRuntime.java:1364)
        org.mozilla.javascript.gen.c6._c1(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:4)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.optimizer.OptRuntime.callName0(OptRuntime.java:108)
        org.mozilla.javascript.gen.c6._c0(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js:51)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:393)
        org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:2834)
        org.mozilla.javascript.gen.c6.call(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.mozilla.javascript.gen.c6.exec(file:/opt/alfresco-4.2.c/tomcat/webapps/share/WEB-INF/classes/alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js)
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:318)
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:192)
        org.springframework.extensions.webscripts.AbstractWebScript.executeScript(AbstractWebScript.java:1305)
        org.springframework.extensions.webscripts.DeclarativeWebScript.execute(DeclarativeWebScript.java:86)
        org.springframework.extensions.webscripts.PresentationContainer.executeScript(PresentationContainer.java:70)
        org.springframework.extensions.webscripts.LocalWebScriptRuntimeContainer.executeScript(LocalWebScriptRuntimeContainer.java:240)
        org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:377)
        org.springframework.extensions.webscripts.AbstractRuntime.executeScript(AbstractRuntime.java:209)
        org.springframework.extensions.webscripts.WebScriptProcessor.executeBody(WebScriptProcessor.java:310)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.process(RenderService.java:599)
        org.springframework.extensions.surf.render.RenderService.renderSubComponent(RenderService.java:505)
        org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1284)
        org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IfBlock.accept(IfBlock.java:82)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86)
        org.springframework.extensions.surf.render.RenderService.processComponent(RenderService.java:432)
        org.springframework.extensions.surf.render.bean.ComponentRenderer.body(ComponentRenderer.java:94)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderComponent(RenderService.java:961)
        org.springframework.extensions.surf.render.RenderService.renderRegionComponents(RenderService.java:900)
        org.springframework.extensions.surf.render.RenderService.renderChromeInclude(RenderService.java:1263)
        org.springframework.extensions.directives.ChromeIncludeFreeMarkerDirective.execute(ChromeIncludeFreeMarkerDirective.java:81)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processRenderable(RenderService.java:204)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.body(ChromeRenderer.java:95)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.ChromeRenderer.render(ChromeRenderer.java:86)
        org.springframework.extensions.surf.render.bean.RegionRenderer.body(RegionRenderer.java:99)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderRegion(RenderService.java:851)
        org.springframework.extensions.directives.RegionDirectiveData.render(RegionDirectiveData.java:91)
        org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179)
        freemarker.core.Environment.visit(Environment.java:428)
        freemarker.core.IteratorBlock.accept(IteratorBlock.java:102)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IteratorBlock$Context.runLoop(IteratorBlock.java:179)
        freemarker.core.Environment.visit(Environment.java:428)
        freemarker.core.IteratorBlock.accept(IteratorBlock.java:102)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.IfBlock.accept(IfBlock.java:82)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment$3.render(Environment.java:246)
        org.springframework.extensions.surf.extensibility.impl.DefaultExtensibilityDirectiveData.render(DefaultExtensibilityDirectiveData.java:119)
        org.springframework.extensions.surf.extensibility.impl.ExtensibilityModelImpl.merge(ExtensibilityModelImpl.java:408)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.merge(AbstractExtensibilityDirective.java:169)
        org.springframework.extensions.surf.extensibility.impl.AbstractExtensibilityDirective.execute(AbstractExtensibilityDirective.java:137)
        freemarker.core.Environment.visit(Environment.java:274)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:126)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.visit(Environment.java:406)
        freemarker.core.BodyInstruction.accept(BodyInstruction.java:93)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Macro$Context.runMacro(Macro.java:172)
        freemarker.core.Environment.visit(Environment.java:614)
        freemarker.core.UnifiedCall.accept(UnifiedCall.java:106)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.MixedContent.accept(MixedContent.java:92)
        freemarker.core.Environment.visit(Environment.java:221)
        freemarker.core.Environment.process(Environment.java:199)
        org.springframework.extensions.webscripts.processor.FTLTemplateProcessor.process(FTLTemplateProcessor.java:171)
        org.springframework.extensions.webscripts.WebTemplateProcessor.executeBody(WebTemplateProcessor.java:438)
        org.springframework.extensions.surf.render.AbstractProcessor.execute(AbstractProcessor.java:57)
        org.springframework.extensions.surf.render.RenderService.processTemplate(RenderService.java:721)
        org.springframework.extensions.surf.render.bean.TemplateInstanceRenderer.body(TemplateInstanceRenderer.java:140)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.bean.PageRenderer.body(PageRenderer.java:85)
        org.springframework.extensions.surf.render.AbstractRenderer.render(AbstractRenderer.java:77)
        org.springframework.extensions.surf.render.RenderService.renderPage(RenderService.java:762)
        org.springframework.extensions.surf.mvc.PageView.dispatchPage(PageView.java:411)
        org.springframework.extensions.surf.mvc.PageView.renderView(PageView.java:306)
        org.springframework.extensions.surf.mvc.AbstractWebFrameworkView.renderMergedOutputModel(AbstractWebFrameworkView.java:316)
        org.springframework.web.servlet.view.AbstractView.render(AbstractView.java:250)
        org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1047)
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:817)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
        org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:549)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.alfresco.web.site.servlet.MTAuthenticationFilter.doFilter(MTAuthenticationFilter.java:74)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.alfresco.web.site.servlet.SSOAuthenticationFilter.doFilter(SSOAuthenticationFilter.java:374)
        org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
        org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:168)
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:929)
        org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
        org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1002)
        org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:585)
        org.apache.tomcat.util.net.AprEndpoint$SocketWithOptionsProcessor.run(AprEndpoint.java:1771)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:724)

        Exception: org.springframework.extensions.webscripts.WebScriptException - 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScriptImpl(JSScriptProcessor.java:324)

        Exception: org.springframework.extensions.webscripts.WebScriptException - 09230001 Failed to execute script 'classpath*:alfresco/site-webscripts/org/alfresco/components/dashlets/my-sites.get.js': 09230000 09230001 Failed during processing of IMAP server status configuration from Alfresco: 09230000 Unable to retrieve IMAP server status from Alfresco: 404
        org.springframework.extensions.webscripts.processor.JSScriptProcessor.executeScript(JSScriptProcessor.java:200)

    UPDATE: I think I've found the problem. Being a newbie to Eclipse, I haven't managed the dependencies well, I think. Could anyone link me to a tutorial describing how to get org.alfresco.repo.node.NodeServicePolicies (as seen in "import org.alfresco.repo.node.NodeServicePolicies;") and other such imports into Eclipse? I've got the Alfresco source from svn, but the tutorial I found seems to fail me. The deployed class file contains these embedded compilation errors:

        java/lang/Error\00\F1 Unresolved compilation problems:
            The declared package "com.openerp.behavior" does not match the expected package "java.com.openerp.behavior"
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.alfresco cannot be resolved
            The import org.apache cannot be resolved
            The import com.openerp cannot be resolved
            NodeServicePolicies cannot be resolved to a type
            PolicyComponent cannot be resolved to a type
            Behaviour cannot be resolved to a type
            NodeService cannot be resolved to a type
            Behaviour cannot be resolved to a type
            JavaBehaviour cannot be resolved to a type
            NotificationFrequency cannot be resolved to a variable
            PolicyComponent cannot be resolved to a type
            QName cannot be resolved
            QName cannot be resolved
            Behaviour cannot be resolved to a type
            Return type for the method is missing
            NodeService cannot be resolved to a type
            NodeService cannot be resolved to a type
            NodeRef cannot be resolved to a type
            QName cannot be resolved to a type
            QName cannot be resolved
            NodeService cannot be resolved to a type
        \00\00\00\00\00(Ljava/lang/String;)V\00LineNumberTable\00LocalVariableTable\00this\00'Ljava/com/openerp/behavior/DeleteAsset;\00init\008Unresolved compilation problems:
            Behaviour cannot be resolved to a type
            JavaBehaviour cannot be resolved to a type
            NotificationFrequency cannot be resolved to a variable
            PolicyComponent cannot be resolved to a type
            QName cannot be resolved
            QName cannot be resolved
            Behaviour cannot be resolved to a type
        \00(LNodeRef;)V\00\00\B0Unresolved compilation problems:
            NodeRef cannot be resolved to a type
            QName cannot be resolved to a type
            QName cannot be resolved
            NodeService cannot be resolved to a type
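
    For what it's worth, the "(wrong name: DeleteAsset)" part of the NoClassDefFoundError usually means the .class file was compiled with a package declaration that doesn't match the directory it was compiled from (here apparently java.com.openerp.behavior vs. com.openerp.behavior). Recompiling from the correct source root so the package path lines up would look roughly like this; the paths are assumptions based on the directories mentioned above:

        # compile from the directory that directly contains com/openerp/behavior/,
        # with the Alfresco jars on the classpath
        cd ~/workspace/project/src
        javac -cp "/home/openerp/alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/lib/*" \
              com/openerp/behavior/DeleteAsset.java
        # deploy the result preserving the package path
        cp com/openerp/behavior/DeleteAsset.class \
           /home/openerp/alfresco-4.2.c/tomcat/webapps/alfresco/WEB-INF/classes/com/openerp/behavior/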

    Read the article

  • How to install php-devel under CentOS 6.3 x64?

    - by Jeremy Dicaire
    I'm trying to install php-devel on my CentOS 6.3 VPS and get a failed dependencies test. From phpinfo():

        SYSTEM Linux 2.6.32-279.5.2.el6.x86_64 #1 x86_64 NTS

    The error:

        error: Failed dependencies:
            php(x86-64) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.x86_64

    I've tried the following RPM packages:

    - php54w-devel-5.4.6-1.w6.x86_64.rpm
    - php-devel-5.4.6-1.el6.remi.i686.rpm
    - php-devel-5.4.6-1.el6.remi.x86_64.rpm

    One of the above packages gave me this:

        root@sv1 [/tmp]# rpm -Uvh php-devel-5.4.6-1.el6.remi.i686.rpm
        warning: php-devel-5.4.6-1.el6.remi.i686.rpm: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
        error: Failed dependencies:
            php(x86-32) = 5.4.6-1.el6.remi is needed by php-devel-5.4.6-1.el6.remi.i686
            libbz2.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686
            libcom_err.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
            libcrypto.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686
            libedit.so.0 is needed by php-devel-5.4.6-1.el6.remi.i686
            libgmp.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
            libgssapi_krb5.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
            libk5crypto.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
            libkrb5.so.3 is needed by php-devel-5.4.6-1.el6.remi.i686
            libncurses.so.5 is needed by php-devel-5.4.6-1.el6.remi.i686
            libssl.so.10 is needed by php-devel-5.4.6-1.el6.remi.i686
            libstdc++.so.6 is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2 is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2(LIBXML2_2.4.30) is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2(LIBXML2_2.5.2) is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2(LIBXML2_2.6.0) is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2(LIBXML2_2.6.11) is needed by php-devel-5.4.6-1.el6.remi.i686
            libxml2.so.2(LIBXML2_2.6.5) is needed by php-devel-5.4.6-1.el6.remi.i686
            libz.so.1 is needed by php-devel-5.4.6-1.el6.remi.i686

    I don't know how to fix this error and download all the dependencies. Thank you.

    Edit 1 (for quanta): Here is "yum repolist":

        root@sv1 [/tmp]# yum repolist
        Loaded plugins: fastestmirror, presto
        Loading mirror speeds from cached hostfile
         * base: mirror.atlanticmetro.net
         * epel: mirror.cogentco.com
         * extras: mirror.atlanticmetro.net
         * rpmforge: mirror.us.leaseweb.net
         * updates: centos.mirror.choopa.net
        repo id      repo name                                        status
        base         CentOS-6 - Base                                  5,980+366
        epel         Extra Packages for Enterprise Linux 6 - x86_64   6,493+1,272
        extras       CentOS-6 - Extras                                4
        rpmforge     RHEL 6 - RPMforge.net - dag                      2,123+2,310
        updates      CentOS-6 - Updates                               499+29
        repolist: 15,099

    "rpm -qa | grep php" didn't return any result. I forgot to mention I'm using cPanel/WHM.

    Edit 2, after adding the Remi repo:

        root@sv1 [/etc/yum.repos.d]# yum clean all
        Loaded plugins: fastestmirror, presto
        Cleaning repos: base epel extras remi remi-test rpmforge updates
        Cleaning up Everything
        Cleaning up list of fastest mirrors
        1 delta-package files removed, by presto

        root@sv1 [/etc/yum.repos.d]# yum repolist
        Loaded plugins: fastestmirror, presto
        Determining fastest mirrors
        epel/metalink            |  12 kB     00:00
         * base: centos.mirror.nac.net
         * epel: mirror.symnds.com
         * extras: centos.mirror.choopa.net
         * remi: remi-mirror.dedipower.com
         * remi-test: remi-mirror.dedipower.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: centos.mirror.nac.net
        base                     | 3.7 kB     00:00
        base/primary_db          | 4.5 MB     00:00
        epel                     | 4.3 kB     00:00
        epel/primary_db          | 4.7 MB     00:00
        extras                   | 3.0 kB     00:00
        extras/primary_db        | 6.3 kB     00:00
        remi                     | 2.9 kB     00:00
        remi/primary_db          | 330 kB     00:00
        remi-test                | 2.9 kB     00:00
        remi-test/primary_db     |  85 kB     00:00
        rpmforge                 | 1.9 kB     00:00
        rpmforge/primary_db      | 2.5 MB     00:00
        updates                  | 3.5 kB     00:00
        updates/primary_db       | 2.3 MB     00:00
        repo id      repo name                                                  status
        base         CentOS-6 - Base                                            5,980+366
        epel         Extra Packages for Enterprise Linux 6 - x86_64             6,493+1,272
        extras       CentOS-6 - Extras                                          4
        remi         Les RPM de remi pour Enterprise Linux 6 - x86_64           96+564
        remi-test    Les RPM de remi en test pour Enterprise Linux 6 - x86_64   25+139
        rpmforge     RHEL 6 - RPMforge.net - dag                                2,123+2,310
        updates      CentOS-6 - Updates                                         499+29
        repolist: 15,220

        root@sv1 [/etc/yum.repos.d]# yum install php-devel
        Loaded plugins: fastestmirror, presto
        Loading mirror speeds from cached hostfile
         * base: centos.mirror.nac.net
         * epel: mirror.symnds.com
         * extras: centos.mirror.choopa.net
         * remi: remi-mirror.dedipower.com
         * remi-test: remi-mirror.dedipower.com
         * rpmforge: mirror.us.leaseweb.net
         * updates: centos.mirror.nac.net
        Setting up Install Process
        No package php-devel available.
        Error: Nothing to do

        root@sv1 [/etc/yum.repos.d]#
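
    One detail worth checking -- an assumption on my part, since /etc/yum.conf isn't shown -- is that cPanel/WHM boxes commonly carry an exclude line for php* in yum's configuration, which would explain "No package php-devel available" even with the Remi repo listed:

        # look for php in yum's excludes
        grep -i exclude /etc/yum.conf
        # bypass the main-config excludes for a single install
        yum --disableexcludes=main install php-devel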

    Read the article

  • LINQ To SQL ignore unique constraint exception and continue

    - by Martin
    I have a single table in a database called Users:

        Users
        -----
        ID       (PK, Identity)
        Username (Unique Index)

    I have set up a unique index on the Username column to prevent duplicates. I am then enumerating through a collection and creating a new user in the database for each item. What I want to do is just insert a new user and ignore the exception if the unique key constraint is violated (as it's clearly a duplicate record in that case). This is to avoid having to craft "where not exists" kinds of queries. First off, is this going to be any more efficient, or should my insert code be checking for duplicates instead? I'm drawn more to having the database enforce that logic, as this prevents any other type of client from inserting duplicate data.

    My other issue is related to LINQ To SQL. I have the following code:

        public class TestRepo
        {
            DatabaseDataContext database = new DatabaseDataContext();

            public void Add(string username)
            {
                database.Users.InsertOnSubmit(new User() { Username = username });
            }

            public void Save()
            {
                database.SubmitChanges();
            }
        }

    I then iterate over a collection and insert new users, ignoring any exceptions:

        TestRepo repo = new TestRepo();
        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            try
            {
                repo.Add(name);
                repo.Save();
            }
            catch { }
        }

    The first time this is run, great: I have three users in the table. If I remove the second one and run this code again, nothing is inserted. I expected the first insert to fail with the exception, the second to succeed (as I just removed that item from the DB) and the third to then fail. What seems to be happening is that once the SqlException is thrown (even though the loop continues to iterate), all of the subsequent inserts fail, even when there isn't a row in the table that would cause a unique violation. Can anyone explain this?

    P.S. The only workaround I could find was to instantiate the repo each time before the insert; then it worked exactly as expected, indicating that it's something to do with the LINQ To SQL DataContext. Thanks.
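
    The P.S. points at the answer: after SubmitChanges fails, the failed User remains in the DataContext's pending change set, so every later SubmitChanges retries it and throws again. A minimal sketch of scoping one context per insert, reusing the class names from the question:

        foreach (var name in new string[] { "Tim", "Bob", "John" })
        {
            // Fresh context per attempt: a failed insert dies with its context
            using (var database = new DatabaseDataContext())
            {
                try
                {
                    database.Users.InsertOnSubmit(new User() { Username = name });
                    database.SubmitChanges();
                }
                catch (SqlException)
                {
                    // Unique-constraint violation: skip the duplicate and move on
                }
            }
        }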

    Read the article

  • maven and lift using scala 2.8 : lift-mapper missing?

    - by Bjorn J
    Newbie question, since I'm not up to speed with maven at all. I'm trying to use scala + lift with scala 2.8; the environment is a win7 box, if that matters. I create a basic project using:

        mvn archetype:generate -U \
          -DarchetypeGroupId=net.liftweb \
          -DarchetypeArtifactId=lift-archetype-basic \
          -DarchetypeVersion=2.0-scala280-SNAPSHOT \
          -DarchetypeRepository=http://scala-tools.org/repo-snapshots \
          -DremoteRepositories=http://scala-tools.org/repo-snapshots \
          -DgroupId=com.liftworkshop \
          -DartifactId=todo \
          -Dversion=1.0-SNAPSHOT

    So far so good, but then I cd into my new project and do:

        mvn jetty:run

    After quite a few downloads, I end up with an error like the one below:

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to resolve artifact.

        Missing:
        ----------
        1) net.liftweb:lift-mapper:jar:2.0-scala280-SNAPSHOT

          Try downloading the file manually from the project website.

          Then, install it using the command:
              mvn install:install-file -DgroupId=net.liftweb -DartifactId=lift-mapper
                  -Dversion=2.0-scala280-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file

          Alternatively, if you host your own repository you can deploy the file there:
              mvn deploy:deploy-file -DgroupId=net.liftweb -DartifactId=lift-mapper
                  -Dversion=2.0-scala280-SNAPSHOT -Dpackaging=jar -Dfile=/path/to/file
                  -Durl=[url] -DrepositoryId=[id]

          Path to dependency:
              1) com.liftworkshop:todo:war:1.0-SNAPSHOT
              2) net.liftweb:lift-mapper:jar:2.0-scala280-SNAPSHOT

        ----------
        1 required artifact is missing.

        for artifact: com.liftworkshop:todo:war:1.0-SNAPSHOT

        from the specified remote repositories:
          scala-tools.snapshots (http://scala-tools.org/repo-snapshots),
          scala-tools.releases (http://scala-tools.org/repo-releases),
          central (http://repo1.maven.org/maven2)

    Any ideas?
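
    Since the snapshots repository is already in the resolution list, one possibility (hedged: the snapshot may simply no longer be published on scala-tools) is that the jar has to be fetched by hand and installed locally, along the lines the error message itself suggests. A sketch, with the file path as a placeholder for wherever the downloaded jar lands:

        mvn install:install-file \
          -DgroupId=net.liftweb \
          -DartifactId=lift-mapper \
          -Dversion=2.0-scala280-SNAPSHOT \
          -Dpackaging=jar \
          -Dfile=/path/to/lift-mapper-2.0-scala280-SNAPSHOT.jar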

    Read the article

  • Svn import with auto-props & pre-commit hook

    - by James Tisato
    My company's svn repo has a lot of MS Word docs in it. We've implemented a policy that all .doc files must have the svn:needs-lock property set, to prevent parallel access to files that are hard to merge (we've also done this for xls, ppt, pdf, etc.). We've implemented the policy by distributing an svn config with auto-props set appropriately for all relevant document types. We've also set up a pre-commit hook that checks that all added files of these types have the needs-lock property set (i.e. if users forget, or are too lazy, to update their svn config file, they won't be able to add any docs to the repo).

    The problem I'm having, however, is that the pre-commit hook fails when users try to import files into the repo; e.g. some users like to add files directly through TortoiseSVN's Repo Browser, which effectively is an svn import. Through testing on other file types, I have seen that doing an import does in fact apply the auto-props listed in my config, but they don't seem to be applied at the point the pre-commit hook runs. When importing .doc files, the hook fails, saying that the needs-lock property is missing.

    Is there really much difference between adding a single file to a working copy and committing it, versus importing a file directly? Do we need to tailor our pre-commit hook in some way to cater for this scenario?
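
    For reference, a minimal sketch of the kind of pre-commit hook described above (the svnlook path and extension list are assumptions, and any real hook would need testing against import transactions specifically, since that is the failing case here):

        #!/bin/sh
        REPOS="$1"
        TXN="$2"
        SVNLOOK=/usr/bin/svnlook

        # Reject any added doc-type file that lacks svn:needs-lock
        $SVNLOOK changed -t "$TXN" "$REPOS" \
          | grep '^A' | cut -c5- \
          | grep -Ei '\.(doc|xls|ppt|pdf)$' \
          | while read f; do
                if ! $SVNLOOK propget -t "$TXN" "$REPOS" svn:needs-lock "$f" >/dev/null 2>&1; then
                    echo "Added file '$f' is missing svn:needs-lock." >&2
                    exit 1
                fi
            done || exit 1

        exit 0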

    Read the article

  • SubSonic generated code and always filtering records

    - by cmroanirgo
    Hi, I have a table called "Users" that has a column called "deleted", a boolean indicating that the user is "deleted" from the system (without actually deleting the row, of course). I also have a lot of tables with a FK to the Users.user_id column. SubSonic generates (very nicely) the code for all the foreign keys in a similar manner:

        public IQueryable<person> user
        {
            get
            {
                var repo = user.GetRepo();
                return from items in repo.GetAll()
                       where items.user_id == _user_id
                       select items;
            }
        }

    While this is good and all, is there a way to generate the code so that it always filters out the "deleted" users too? In the office here, the only suggestion we can think of is to use a partial class and extend it. This is obviously a pain when there are lots and lots of classes using the Users table, not to mention the fact that it's easy to inadvertently use the wrong property (User vs ActiveUser in this example):

        public IQueryable<User> ActiveUser
        {
            get
            {
                var repo = User.GetRepo();
                return from items in repo.GetAll()
                       where items.user_id == _user_id
                          && items.deleted == 0
                       select items;
            }
        }

    Any ideas?
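
    One pattern that cuts down the duplication without touching the generated code (a sketch only; the "deleted" property name and its integer comparison are taken from the question and assumed to exist on the generated User class):

        // Hypothetical extension method: the filter lives in one place
        public static class UserFilters
        {
            public static IQueryable<User> Active(this IQueryable<User> users)
            {
                return users.Where(u => u.deleted == 0);
            }
        }

        // Usage: consume the generated navigation property, then filter
        // var activeUsers = someObject.user.Active();

    This still relies on callers remembering to append .Active(), so it doesn't fully remove the "wrong property" risk, but it avoids writing a partial class per foreign key.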

    Read the article

  • How to manage sessions in NHibernate unit tests?

    - by Ben
    I am a little unsure as to how to manage sessions within my NUnit test fixtures. In the following test fixture, I am testing a repository. My repository constructor takes in an ISession (since I will be using session-per-request in my web application). In my test fixture setup I configure NHibernate and build the session factory. In my test setup I create a clean SQLite database for each test executed.

        [TestFixture]
        public class SimpleRepository_Fixture
        {
            private static ISessionFactory _sessionFactory;
            private static Configuration _configuration;

            [TestFixtureSetUp] // called before any tests in the fixture are executed
            public void TestFixtureSetUp()
            {
                _configuration = new Configuration();
                _configuration.Configure();
                _configuration.AddAssembly(typeof(SimpleObject).Assembly);
                _sessionFactory = _configuration.BuildSessionFactory();
            }

            [SetUp] // called before each test method is called
            public void SetupContext()
            {
                new SchemaExport(_configuration).Execute(true, true, false);
            }

            [Test]
            public void Can_add_new_simpleobject()
            {
                var simpleObject = new SimpleObject() { Name = "Object 1" };

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    repo.Save(simpleObject);
                }

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    var fromDb = repo.GetById(simpleObject.Id);

                    Assert.IsNotNull(fromDb);
                    Assert.AreNotSame(simpleObject, fromDb);
                    Assert.AreEqual(simpleObject.Name, fromDb.Name);
                }
            }
        }

    Is this a good approach, or should I be handling the sessions differently? Thanks, Ben
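
    If the per-test boilerplate grows, one option (a sketch; purely a convenience wrapper around the pattern already shown, and a natural place to add a transaction later) is a small helper on the fixture:

        // Scopes a session to a single unit of work within a test
        private void WithSession(Action<ISession> work)
        {
            using (var session = _sessionFactory.OpenSession())
            {
                work(session);
            }
        }

        // Usage inside a test:
        // WithSession(s => new SimpleObjectRepository(s).Save(simpleObject));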

    Read the article

  • Eclipse - Import existing multi-repo CVS project folder

    - by iQ
    Hey guys, wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working in Linux Ubuntu, the project folder is located on a mounted shared network drive, and I have installed the "Eclipse CVS Client" plug-in for my version of Eclipse (Helios).

    I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have done the following:

    1. Created a new project, selected existing source, located my project folder and clicked OK to finish creating. In the end the CVS files weren't automatically read.
    2. Did the same as above, and after project creation went to "Project menu > Team > Share Project". It asks me to choose a repository and doesn't automatically find the CVS information in the subfolders.

    If you're wondering, I have set up both repositories in my Eclipse and can browse them through the CVS browser. My project directory layout is like this:

        +- Project Folder (no CVS folder at this level)
           +- Repo A folder
           |    +- CVS meta-info folder is inside, along with all checked-out files from Repo A
           +- Repo B folder
           |    +- CVS meta-info folder is inside, along with all checked-out files from Repo B
           +- (a couple of random files, not in CVS)

    Thanks for the help

    Read the article

  • Can I keep git from pushing the master branch to all remotes by default?

    - by Curtis
    I have a local git repository with two remotes ('origin' is for internal development, and 'other' is for an external contractor to use). The master branch in my local repository tracks the master in 'origin', which is correct. I also have a branch 'external' which tracks the master in 'other'.

    The problem I have now is that my master branch ALSO wants to push to the master in 'other', which is an issue. Is there any way I can specify that the local master should NOT push to other/master? I've already tried updating my .git/config file to include:

        [branch "master"]
            remote = origin
            merge = refs/heads/master
        [branch "external"]
            remote = other
            merge = refs/heads/master
        [push]
            default = upstream

    But git remote show still reports that my master is pushing to both remotes:

        toko:engine cmlacy$ git remote show origin
        Password:
        * remote origin
          Fetch URL: <REPO LOCATION>
          Push URL: <REPO LOCATION>
          HEAD branch: master
          Remote branches:
            master       tracked
            refresh-hook tracked
          Local branch configured for 'git pull':
            master merges with remote master
          Local ref configured for 'git push':
            master pushes to master (up to date)

    Those are all correct.

        toko:engine cmlacy$ git remote show other
        Password:
        * remote other
          Fetch URL: <REPO LOCATION>
          Push URL: <REPO LOCATION>
          HEAD branch: master
          Remote branch:
            master tracked
          Local branch configured for 'git pull':
            external merges with remote master
          Local ref configured for 'git push':
            master pushes to master (local out of date)

    That last section is the problem: 'external' should merge with other/master, but master should NEVER push to other/master. That's never going to work.
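
    One knob worth looking at (a sketch; the branch names are taken from the question, and this changes what a bare "git push other" does rather than touching origin): give the 'other' remote an explicit push refspec, so nothing is pushed there by default except what you name.

        # Only ever push local 'external' to other's master by default
        git config remote.other.push refs/heads/external:refs/heads/master

        # Equivalent .git/config form:
        # [remote "other"]
        #     push = refs/heads/external:refs/heads/master

    With that in place, "git push other" pushes only external, and getting master onto other/master would require spelling out the refspec by hand.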

    Read the article

  • git contributors not showing up properly in github/etc.

    - by RobH
    I'm working in a team on a big project, but when I'm doing the merges I'd like the developer's name to appear in GitHub as the author; currently, I'm the only one showing up, since I'm the one merging.

    Context: there are 4 developers, and we're using the "integration manager" workflow with GitHub. Our "blessed" repo is under the organization, and each developer manages their own public/private repo. I've been tasked with being the integration manager, so I'm doing the merges, etc. Where I could be messing up is that I'm basically working out of my rob/project.git instead of the org/project.git, so when I do local merges I operate on my repo, then I push to both my public repo and the org public repo. (Make sense?) When I push to the blessed repo nobody else shows up as an author, since all commits are coming from me. How can I get around this?

    Also, we all forked org/project.git, yet in the network graph nobody is showing up. Did we mess this up too?

    I'm used to working with git solo and don't have much experience with handling a team of devs. Merging seems like the right thing to do, but I'm being thrown off since GitHub is kind of ignoring the other contributors. If this makes no sense at all: how do you use GitHub to manage a single project across 4 developers? (Preferably the integration manager workflow; branching, I think, would solve the problem.) Thanks for any help
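
    A general point that may apply here (hedged: the remote and branch names below are made up for illustration): git merges preserve the original author on every merged commit, but applying changes as diffs or recommitting copied files records the integrator as the author. So the thing to verify is that the integration step actually fetches and merges the developers' commits:

        # Fetch a developer's published branch and merge their actual commits
        git remote add alice https://github.com/alice/project.git
        git fetch alice
        git merge --no-ff alice/feature-x

        # then publish to the blessed org repo
        git push org master

    If work does arrive as a patch instead, authorship can still be kept with an explicit flag:

        git commit --author="Alice Example <alice@example.com>" -m "Feature X"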

    Read the article

  • Mercurial "server"

    - by user85116
    I've been using Mercurial for a little while, but mainly for my own usage. Now, though, I have a project where two of us are building the same thing, and we will probably be modifying each other's files. I would like to set up a Mercurial repo on a server and make that repo the "server", so that my changes and the other editor's changes both push to it (so basically the Subversion/CVS model). I like Mercurial, though, and don't want to switch to something like Subversion.

    Here in my own network everything is done on Linux, and my "server" has OpenSSH installed. So pushing my changes (I work on multiple computers) from one computer to the server is just a matter of "hg push"; the protocol used to transfer the changes is ssh.

    The problem is that I use Linux, the server will be Windows (so no OpenSSH, right?), and the other editor will be using Windows too. As far as I know, the best way to work with Mercurial in these kinds of setups is for the repo to pull changes from the source, rather than the source pushing to the "server". But I'm behind several firewalls (not entirely my network), so my computer won't be visible from the server, and I'm assuming the other editor is behind a firewall too (so we can't just start up the local Mercurial HTTP server and have the "server" computer pull from that).

    What's the best way for both editors to get our changes to the server repo? (I should add that the server is a server on the internet, as visible as something like google.com. It's a hosted Windows server, but I would probably have permission to install software if needed for this.)
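
    One route that fits these constraints (a sketch under stated assumptions: host name, repo path and settings are placeholders, and this is not hardened for production) is to run Mercurial's built-in web server on the Windows box and push to it over HTTP, which needs no ssh at all:

        REM On the Windows server (Mercurial installed):
        hg serve -R C:\repos\project --port 8000 ^
            --config web.allow_push=* --config web.push_ssl=false

        # From each editor's clone, through any outbound-friendly firewall:
        hg push http://yourserver.example.com:8000/

    For anything long-lived, the usual step up is hgweb behind IIS or Apache with real authentication and SSL, since allow_push=* combined with push_ssl=false accepts pushes from anyone who can reach the port.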

    Read the article

< Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >