Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • sudo ENV_KEEP not always preserving

    - by mafro
    When I run sudo -s, my environment is preserved. However, when running a simple sudo <command> it appears not to be preserved. The contents of my sudoers file: mafro@ip-10-xx-xx-250:~ > sudo cat /etc/sudoers.d/mafro Defaults env_reset Defaults env_keep += "HOME" mafro ALL=(ALL) NOPASSWD:ALL Using sudo -s, the ll alias is available: mafro@ip-10-xx-xx-250:~ > sudo -s root@ip-10-xx-xx-250:~ > ll total 8K drwxrwxr-x 2 mafro dev 4.0K Jun 9 23:59 bin drwxr-xr-x 20 mafro dev 4.0K Jun 9 23:59 dotfiles Using straight sudo, it is not: mafro@ip-10-xx-xx-250:~ > sudo ll sudo: ll: command not found What is happening here?
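    As a side note that may help narrow this down: ll is usually a shell alias, not an environment variable, so env_keep never applies to it; sudo -s starts a shell that reads root's startup files, while a plain sudo <command> runs the command directly without a shell. A minimal sketch to check this (the trailing-space alias on the last line is a common workaround and is not from the original post):

        # is "ll" an alias rather than a binary in the invoking shell?
        type ll

        # a trailing space in the alias makes the shell also expand
        # the word that follows "sudo", so aliases survive plain sudo
        alias sudo='sudo '
        sudo ll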

    Read the article

  • TeamCity NuGet private feed - Credentials

    - by Gaui
    I installed TeamCity and enabled the NuGet server, both the Authenticated Feed and the Public Feed. When I try to push packages to the server with the following command: > nuget push package.nupkg [API-Key-here] -s http://myserver/httpAuth/app/nuget/v1/FeedService.svc/ I get the following prompt: Please provide credentials for: http://myserver/httpAuth/app/nuget/v1/FeedService.svc/ and it asks me for both "UserName" and "Password". I've tried entering the credentials of the TeamCity administrator and of the Windows administrator, but nothing works. So I tried pushing to the Public Feed with the following command: > nuget push package.nupkg [API-Key-here] -s http://myserver/guestAuth/app/nuget/v1/FeedService.svc/ Then I get the following: Failed to process request. 'Method Not Allowed'. The remote server returned an error: (405) Method Not Allowed. Regarding the Authenticated Feed, which credentials does it expect and where do I specify them? And why is the Public Feed not working?
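    One thing that may be worth trying (a sketch, not a confirmed fix): nuget.exe can store credentials for a feed ahead of time with the sources command, so the push no longer prompts. The feed URL below is the authenticated one from the question; the user name and password are placeholders for whichever TeamCity account is allowed to publish:

        nuget sources Add -Name TeamCityFeed -Source http://myserver/httpAuth/app/nuget/v1/FeedService.svc/ -UserName <teamcity-user> -Password <password>
        nuget push package.nupkg [API-Key-here] -s http://myserver/httpAuth/app/nuget/v1/FeedService.svc/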

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development of InnoDB Memcached and completed a few widely requested features that make it competitive in more scenarios. Notably, they are: 1) support for multiple table mappings; 2) a background thread to auto-commit long-running transactions; 3) better binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings. In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users may want to use the InnoDB Memcached interface against different tables, so being able to access a different table at run time, and to have different mappings for different connections, is very desirable. In this GA release, you can do both. We will discuss the key concepts and the key steps for using this feature.

    1) The "mapping name" in the "get" and "set" commands. To let InnoDB Memcached map to a new table, the user (DBA) still has to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there are security considerations behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached can look up such a table when it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, include the registration name in your get or set commands in the form "get @@new_mapping_name.key"; the "@@" prefix signals a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter, "." by default. So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name or get @@mapping_name set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first maps the InnoDB table "test/demo_test" with mapping name "setup_1": INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY"); Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user issues: get @@setup_3.key_a (this also outputs the value corresponding to key "key_a") or simply get @@setup_3 Once this is done, the connection stays switched to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands such as: get key_b  set key_c 0 0 7 These DMLs are all directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter. For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table named "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); If you want a different delimiter, change it in the config_options table.

    3) Default mapping. Once there are multiple table mappings, there always has to be a "default" mapping. We decided that if a mapping named "default" exists, it is chosen as the default; otherwise, the first row of the containers table is chosen. Please note that user tables can be repeated in the "containers" table (for example, to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command. In addition, we extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue: bind setup_3 This switches the connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions. This feature relates to the "batch" concept we discussed in earlier blogs. The "batch" feature lets us batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance, but it also comes with some disadvantages: for example, you cannot view "uncommitted" operations from the SQL side unless you set the transaction isolation level to read uncommitted, and it can hold certain row locks for an extended period of time, which reduces concurrency. To deal with this, we introduced a background thread that auto-commits transactions that have been idle for a certain amount of time (5 seconds by default). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions; if a transaction has been idle longer than the limit and is not being used, it is committed. The limit is configurable through "innodb_api_bk_commit_interval"; its default value is 5 seconds, its minimum 1 second, and its maximum 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when you set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large values. It also reduces the number of locks held by long-running transactions, and thus further increases concurrency.

    Enhancement in binlog performance. As you may know, binlog operations are not done by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation.
    This required us to always set the "batch" size to 1 when the binlog is on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" are configured to. This put a big restriction on our ability to scale, and the overhead of creating and destroying these constructs also bogged performance down. With this release, we made the necessary changes to keep the MySQL constructs as long as they remain valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as it does with the binlog turned off, it is much better than in the previous release, and we continue to optimize this area to improve performance as much as possible.

    Performance study. Amerandra of our System QA team conducted some performance studies comparing queries through our InnoDB Memcached connection and plain SQL, with some interesting results. The test was conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB of memory, Intel Xeon 2.0 GHz X86_64 CPUs (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables.

    Table 1: Performance comparison on Set operations

    Connections | 5.6.7-RC Memcached plugin, Set (QPS), memcached-threads=8*** | 5.6.7-RC* Set** | X faster
    8           | 30,000                                                       | 5,600           | 5.36
    32          | 59,000                                                       | 13,000          | 4.54
    128         | 68,000                                                       | 8,000           | 8.50
    512         | 63,000                                                       | 6,800           | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, as implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes that procedure.
    *** added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations

    Connections | 5.6.7-RC Memcached plugin, Get (QPS), memcached-threads=8 | 5.6.7-RC Get (QPS) | X faster
    8           | 42,000                                                    | 27,000             | 1.56
    32          | 101,000                                                   | 55,000             | 1.83
    128         | 117,000                                                   | 52,000             | 2.25
    512         | 109,000                                                   | 52,000             | 2.10

    With the "get" query (the select query), Memcached performs 1.5 to 2 times faster than plain SQL.

    Summary: we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread that commits long-running idle transactions, so you can configure large batch write/read sizes without worrying about holding a large number of row locks or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance and functionality to make InnoDB Memcached a good demonstration of our InnoDB APIs. Jimmy Yang, September 29, 2012
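    For anyone who wants to try the table-mapping switch described above interactively, here is a minimal sketch using the plain memcached text protocol over telnet; it assumes the plugin is listening on the default port 11211 and that the three containers rows from the example have been created:

        # connect to the InnoDB Memcached plugin (default port assumed)
        telnet 127.0.0.1 11211

        # switch this connection to the "setup_3" mapping and read a key
        get @@setup_3.key_a

        # later commands stay bound to my_database/my_demo
        set key_c 0 0 7
        new_val
        get key_c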

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with roo? Unfortunately, due to licensing restrictions with the Oracle JDBC Drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here. First, you need to download the appropriate Oracle Drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I’d also like to say that companies that have password requirements that don’t allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyways, once you have it downloaded you need to install it with maven: mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true Here comes the fun part. You need to create an osgi wrapper for the driver to install it in roo; otherwise, roo cannot see the driver. Create a new folder and put the contents of the oracle roo addon pom gist I created into it. Now build it with maven. You may want to change some of the artifact ids and dependencies for your particular situation. mvn package Now open a roo shell and execute the following command: osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar Now run (in roo): jpa setup --provider HIBERNATE --database ORACLE dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2 dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3 database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO% database properties set --key database.username --value %YOUR_USERNAME% database properties set --key database.password --value %YOUR_PASSWORD% database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain If you have any package loading exceptions when running the reverse engineer command, you can uninstall the osgi bundle, set the package to optional in the osgi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, then reinstall in roo.
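    Before building the OSGi wrapper, it can save some head-scratching to confirm that the install:install-file step actually put the driver where the wrapper pom will look for it. A quick check against the standard local-repository layout (path derived from the coordinates used above):

        ls ~/.m2/repository/com/oracle/ojdbc6/11.2.0.3/
        # expect ojdbc6-11.2.0.3.jar plus the generated ojdbc6-11.2.0.3.pom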

    Read the article

  • Oracle Expert Live Virtual Seminars - Learn the tricks that only the experts know

    - by rituchhibber
    Oracle University Expert Seminars are exclusive events delivered by top Oracle experts with years of experience in working with Oracle products.
    Introduction into ADF & BPM with Markus Grünewald - 11-12 December 2012
    ADF/WebCenter 11g Development in Depth with Andrejus Baranovskis - 13-14 December 2012
    Beating the Optimizer with Jonathan Lewis - Online - 17 January 2013
    RAC Performance Tuning On-Line with Arup Nanda - 25 January 2013
    Mastering Oracle Parallel Execution with Randolf Geist - 30 January 2013
    Minimize Downtime with Rolling Upgrade using Data Guard with Uwe Hesse - 8 February 2013
    For a full list of Oracle Expert Seminars near you or online, click here. Remember that your OPN discount is applied to the standard prices shown on the website. For more information, assistance in booking, and to request new dates, contact your local Oracle University Service Desk.

    Read the article

  • How to get Broadcom BCM 43XX Wireless card working

    - by Fer1805
    NOTE - Although this question is for a specific version of Ubuntu and a specific Broadcom model, the answer to it applies to most Broadcom models and to Ubuntu 11.04, 11.10, 12.04 and 12.10. If you have come here from another question, feel free to read the answer instead. The answer covers many problems with several solutions. I'm having serious problems installing the Broadcom drivers for Ubuntu 11.04. It worked perfectly on my previous version, but now it seems impossible. I'm a user with no advanced knowledge of Linux, so I would need clear explanations about make, compile, etc. I was following the instructions on the following blog, with no luck: How can I get Broadcom BCM4311 Wireless working? Can someone help me? Edit: For the command "lspci | grep Network", I get the following message: 06:00.0 Network controller: Broadcom Corporation BCM4311 802.11b/g WLAN (rev 01) For the command iwconfig, I get the following: lo no wireless extensions. eth0 no wireless extensions. When I follow the following steps (from the above link), there are no error messages at all: open the 'Synaptic Package Manager' and search for bcm uninstall the bcm-kernel-source package make sure that the firmware-b43-installer and the b43-fwcutter packages are installed type into terminal: cat /etc/modprobe.d/* | egrep '8180|acx|at76|ath|b43|bcm|CX|eth|ipw|irmware|isl|lbtf|orinoco|ndiswrapper|NPE|p54|prism|rtl|rt2|rt3|rt6|rt7|witch|wl' (you may want to copy this) and see if the term blacklist bcm43xx is there if it is, type cd /etc/modprobe.d/ and then sudo gedit blacklist.conf put a # in front of the line: blacklist bcm43xx then save the file (I was getting error messages in the terminal about not being able to save, but it actually did save properly). reboot 'End of procedure' Before (not Ubuntu 11.04), if I wanted to connect to wireless, I just went to the icon at the top of the screen, clicked, it showed all the wireless networks available, and done. Now, the only options I see are: Wired Network Auto Eth0 Disconnect VPN Enable networking Connection information Edit connection. I hope the above info is enough for you to help.

    Read the article

  • What are the tangible advantages of proper unit tests over functional tests called unit tests

    - by Jackie
    A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to "run" the tests. Adding PowerMock to handle these cases is being shot down as cost-prohibitive due to the need to upgrade the existing project to support it (another discussion). My questions are: what are the REAL-world tangible benefits of proper unit testing that I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective?

    Read the article

  • How can I find a position between 4 vertices in a fragment shader?

    - by c4sh
    I'm creating a shader with SharpDX (DirectX 11 in C#) that takes a segment (2 points) from the output of a vertex shader and then passes them to a geometry shader, which converts this line into a rectangle (4 points) and assigns the four corners a texture coordinate. After that I want a fragment shader (which receives the interpolated position and the interpolated texture coordinates) that checks the depth at the "spine" of the rectangle (that is, on the line that passes through the middle of the rectangle). The problem is I don't know how to extract the position of the corresponding fragment at the spine of the rectangle. This happens because I have the texture coordinates interpolated, but I don't know how to use them to get the fragment I want, because the coordinate systems of a) the texture and b) the position of my fragment in screen space are not the same. Thanks a lot for any help.

    Read the article

  • Specify a custom dictionary for FxCop and Visual Studio source analysis

    - by Marko Apfel
    Renaming the default custom dictionary from CustomDictionary.xml to another name – for instance FxCop.CustomDictionary.xml – needs some additional changes in the applications involved. Visual Studio Team System code analysis: for Visual Studio Team System code analysis, this file should be added as a link to all projects and its Build Action set to CodeAnalysisDictionary. Build target: in a build target, the command-line tool FxCopCmd should be called with the /dictionary parameter: <Target Name="FxCop"> <Exec Command="&quot;$(ProjectDir)..\..\build\FxCop\FxCopCmd.exe&quot; /file:&quot;$(TargetPath)&quot; /project:&quot;$(ProjectDir)..\EsriDE.SfgPraxair.FxCop&quot; /directory:&quot;$(ProjectDir)..\..\lib\Esri.ArcGIS&quot; /directory:&quot;$(ProjectDir)..\..\lib\Microsoft&quot; /dictionary:&quot;$(ProjectDir)..\FxCop.CustomDictionary.xml&quot; /out:&quot;$(OutDir)..\$(ProjectName).FxCopReport.xml&quot; /console /forceoutput /ignoregeneratedcode"> </Exec> <Message Text="FxCop finished." /> </Target> FxCop GUI (standalone application): in the FxCop GUI there is no option to specify your own file name, but you can add a hint in the FxCop project file. Open this file and look for the line: <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True" /> Then change it to: <CustomDictionaries SearchFxCopDir="True" SearchUserProfile="True" SearchProjectDir="True"> <CustomDictionary Path="FxCop.CustomDictionary.xml"/> </CustomDictionaries> Ready :-)
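    If the FxCop target shown above lives in the project file, it can also be invoked on its own from a developer command prompt; a small sketch (the project file name is a placeholder, and the MSBuild location varies per machine):

        msbuild MyProject.csproj /t:FxCop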

    Read the article

  • .wine-pipelight folder not present

    - by DaimyoKirby
    Following the instructions on the pipelight installation page, I installed pipelight on Ubuntu 14.04. However, upon opening Firefox the .wine-pipelight folder isn't present in my home folder, and I get the following errors: [PIPELIGHT:LIN:unknown] attached to process. [PIPELIGHT:LIN:unknown] checking environment variable PIPELIGHT_SILVERLIGHT5_1_CONFIG. [PIPELIGHT:LIN:unknown] searching for config file pipelight-silverlight5.1. [PIPELIGHT:LIN:unknown] trying to load config file from '/home/alden/.config/pipelight-silverlight5.1'. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:427:checkSilverlightGraphicDriver(): error in execlp command - probably silverlightGraphicDriverCheck not found or missing execute permission. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:441:checkSilverlightGraphicDriver(): GPU driver check - Your driver is not in the whitelist, hardware acceleration disabled. [PIPELIGHT:LIN:silverlight5.1] using wine prefix directory /home/alden/.wine-pipelight. [PIPELIGHT:LIN:silverlight5.1] checking plugin installation - this might take some time. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:374:checkPluginInstallation(): error in execvp command - probably dependencyInstaller/sandbox not found or missing execute permission. [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:384:checkPluginInstallation(): Plugin installer did not run correctly (exitcode = 1). [PIPELIGHT:LIN:silverlight5.1] basicplugin.c:108:attach(): plugin not correctly installed - aborting. I've reinstalled quite a few times and run through many of the common fixes offered on the pipelight Launchpad pages and here on Ask Ubuntu, and still it fails to run. Is there a reason why this folder isn't present, or why I'm getting these errors? Edit: Oddly enough, the .wine-pipelight folder is created when I open Nitro, although this still doesn't fix the issue.

    Read the article

  • file-name encoding problems

    - by tenhouse
    I googled this topic but couldn't find what I was looking for... The following happened to me: I had my files stored on an NTFS USB hard disk; because of space problems I moved them to an ext3 system... and somehow the filename encoding got screwed up (the content is still OK as far as I saw). My files now look like the following: Kküken <--- should have an "ü" Jäger <--- should be an "ä" Zwölf <--- should be an "ö" fünfte <-- should be an "ü" etc... These are just examples, but they already give me my first question: Why does the "ü" have two different representations? (Maybe I screwed up before, and now I have a mix of x different encoding layers? :) ) I tried the following command: convmv -r -f UTF-8 -t ISO-8859-1 * This command worked for some files (for example Zwölf) but not for all: iso-8859-1 doesn't cover all the characters needed for "fünfte". So I guess it must be another encoding - but which? How can I find this out? And is there any way that I can still fix all of this?
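    One observation that may help (hedged, since it depends on how the files were copied): the two different broken forms of "ü" look like the NFC and NFD Unicode normalization forms of the same character, both mis-displayed, which typically happens when some names passed through a Mac/HFS+ volume or a tool that decomposes accents. convmv can convert between normalization forms as well as encodings, and by default it only prints what it would do (you must add --notest to actually rename), so the following sketch is safe to experiment with:

        # dry run: what would a pure NFC normalization pass change?
        convmv -r -f utf8 -t utf8 --nfc .

        # dry run: what if some names are double-encoded UTF-8?
        convmv -r -f utf8 -t utf8 --fixdouble .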

    Read the article

  • Cannot run ifconfig-like commands via browser

    - by savruk
    Hi, the problem is that I cannot run "ifconfig" or similar commands via the browser. Environment: Programming language: Python. Server: lighttpd (CGI), running on BusyBox. The machine is really small, so I am really restricted. Tried techniques: chown every script to root. But it makes no difference. Why? Because lighttpd runs under another user, not under root. As it is not root, when I try to run the script from the browser it always calls the Python file with its own uid. So it is impossible to run commands like "ifconfig eth0 192.168.2.123" via the web browser; I get an "ifconfig: SIOCSIFADDR: Permission denied" error. What can I do? I do not have any sudoers file, so I cannot modify the sudo configuration. Well, I don't even have a "sudo" command :) Thanks for your help

    Read the article

  • How to stop pptpd even when there are active vpn client connections?

    - by Michael Z
    After issuing the command to stop pptpd, pptpd won't stop until all the VPN clients have disconnected. The following output shows pptpd still running after issuing the stop command. ubuntu@ip-10-138-31-87:~$ sudo /etc/init.d/pptpd stop Stopping PPTP: pptpd. ubuntu@ip-10-138-31-87:~$ ps -ef |grep pptpd root 5524 1 0 21:46 ? 00:00:00 pptpd [<myIp>:8544 - 0000] root 5525 5524 0 21:46 pts/1 00:00:00 /usr/sbin/pppd local file /etc/ppp/pptpd-options 115200 192.168.0.1:192.168.0.234 ipparam <myIP> plugin /usr/lib/pptpd/pptpd-logwtmp.so pptpd-original-ip <myIP> ubuntu 5564 4668 0 21:50 pts/4 00:00:00 grep --color=auto pptpd After all the active VPN client connections were disconnected manually, pptpd then stopped. Is there a way to force pptpd to stop even when there are active VPN client connections?
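    If the goal is simply to force everything down, one blunt but common approach (a sketch, not an official pptpd option) is to terminate the remaining per-client pppd processes, which are children of pptpd, alongside the init script:

        sudo /etc/init.d/pptpd stop
        # kill any per-client sessions that are keeping the daemon alive
        sudo pkill pppd
        sudo pkill pptpd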

    Read the article

  • How to add assemblies in a 64-bit machine?

    - by marko
    My old cmd script: C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm blabla.dll C:\Windows\Microsoft.NET\Framework\v2.0.50727\GacUtil -i blabla.dll (which worked fine on my old machine). But now I have a script for a 64-bit machine (Windows Server 2008 R2): C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm blabla.dll C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil -i blabla.dll Then I get this message: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm blabla.dll Microsoft (R) .NET Framework Assembly Registration Utility 2.0.50727.5420 Copyright (C) Microsoft Corporation 1998-2004. All rights reserved. Types registered successfully C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil -i blabla.dll 'C:\Program' is not recognized as an internal or external command, operable program or batch file. The second command is not successful.
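    The error line points at the cause: the path to GacUtil contains spaces ("C:\Program Files\..."), so cmd stops parsing the command at 'C:\Program'. Quoting the full path should be enough; a sketch of the second line with quotes added:

        "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil" -i blabla.dll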

    Read the article

  • Commands don't have permission when using absolute path

    - by Markos
    I have folders set up this way: /srv/samba/video getfacl /srv/samba/video # file: srv/samba/video # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx group:deluge:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:group:deluge:rwx default:mask::rwx default:other::--- That means user deluge has rwx on the folder /srv/samba/video. However, when running commands as user deluge, I am getting weird permission errors. When in folder /srv/samba/video: sudo -u deluge mkdir foo works flawlessly. But when using an absolute path: sudo -u deluge mkdir /srv/samba/video/foo I get permission denied. When running sudo -u deluge id, I get the output uid=113(deluge) gid=124(deluge) skupiny=124(deluge) which shows that user deluge is indeed in group deluge. Also, the behavior was the same when I gave the permissions to user deluge as well, not just to group deluge. When executing as a non-system user, it does work. The reason I want to use absolute paths is that I am using an automatically triggered post-download script which extracts some files into the folder. I have spent way too many hours trying to solve this problem myself. mkdir isn't the only command that fails; touch does the same thing, so I suspect it's not mkdir's fault. If you need more info, I will try to put it in here, just ask. Thanx in advance. Edit: It seems that the root of the problem is the ACL set on the parent folder /srv/samba, which indeed does not grant permissions to deluge (but does not deny them either). getfacl /srv/samba # file: srv/samba # owner: root # group: nogroup user::rwx group::--- group:sambaclients:rwx mask::rwx other::--- default:user::rwx default:group::--- default:group:sambaclients:rwx default:mask::rwx default:other::--- If I grant the permission on this folder as well, it suddenly starts to work, so I believe the ACL on /srv/samba is somehow denying the permissions to deluge. So the question is: how do I set ACLs on both /srv/samba and /srv/samba/video so that sambaclients have access to the whole of /srv/samba and its subdirectories, and deluge has access only to /srv/samba/video and its subdirectories?
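    Since reaching /srv/samba/video by absolute path requires search (x) permission on every directory along the path, a minimal sketch that matches the stated goal (deluge may pass through /srv/samba but only gets full access inside video) is to grant the deluge group execute-only on the parent; treat it as a starting point rather than a verified fix:

        # let group deluge traverse the parent without read/write access
        sudo setfacl -m g:deluge:x /srv/samba
        getfacl /srv/samba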

    Read the article

  • Builder Pattern: When to fail?

    - by skiwi
    When implementing the Builder pattern, I often find myself confused about when to let building fail, and I even manage to take a different stand on the matter every few days. First some explanation: By failing early I mean that building an object should fail as soon as an invalid parameter is passed in, so inside the SomeObjectBuilder. By failing late I mean that building an object can only fail on the build() call that implicitly calls a constructor of the object to be built. Then some arguments: In favor of failing late: A builder class should be no more than a class that simply holds values. Moreover, it leads to less code duplication. In favor of failing early: A general approach in software programming is that you want to detect issues as early as possible, and therefore the most logical places to check would be the builder class' constructor, 'setters', and ultimately the build method. What is the general consensus about this?

    Read the article

  • OO Design, how to model Tonal Harmony?

    - by David
    I have started to write a program in C++11 that would analyse chords, scales, and harmony. The biggest problem I am having in my design phase is that the note 'C' is a note, a type of chord (Cmaj, Cmin, C7, etc.), and a type of key (the key of C major, C minor). The same issue arises with intervals (minor 3rd, major 3rd). I am using a base class, Token, that is the base class for all 'symbols' in the program. So for example: class Token { public: typedef shared_ptr<Token> pointer_type; Token() {} virtual ~Token() {} }; class Command : public Token { public: Command() {} pointer_type execute(); } class Note : public Token; class Triad : public Token; class MajorTriad : public Triad; // CMajorTriad, etc class Key : public Token; class MinorKey : public Key; // Natural minor, harmonic minor, etc class Scale : public Token; As you can see, creating all the derived classes (CMajorTriad, C, CMajorScale, CMajorKey, etc.) would quickly become ridiculously complex, once you include all the other notes as well as enharmonics. Multiple inheritance would not work either, i.e.: class C : public Note, Triad, Key, Scale - class C cannot be all of these things at the same time. It is contextual, and polymorphism will not help here either (how do you determine which super methods to call? Calling every superclass constructor should not happen here). Are there any design ideas or suggestions that people have to offer? I have not been able to find anything on Google regarding modelling tonal harmony from an OO perspective. There are just far too many relationships between all the concepts here.

    Read the article

  • Disable Password Complexity/Expiration etc. Policy on Windows Server 2008

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). One of the things I like to do, for development environments only, is to get rid of those excessively bothersome password policies. I like to have my password be something like p@ssword1, so it is easy to remember, etc. Obviously, never do this in production. However, Windows Server 2008 comes with a password policy that expires my passwords every 90 days, requires me to pick complex passwords, won't let me reuse passwords, etc. Well, here is how you disable the password policy on a Windows Server 2008 machine - Run Group Policy Management (gpmc.msc) Expand to your domain, look for Forest\Domains\yourdomain\default domain policy. Go to the settings tab, right click on the tab, and choose "Edit". This will open the Group Policy Management Editor, in which - Go to Computer Configuration\Policies\Windows Settings\Security Settings\Account Policies\Password Policy, and change the policy to whatever suits you. Close everything, run a command prompt as administrator, and issue a "gpupdate /force" command to force the group policy update on the machine. Restart, and you're done! :) Comment on the article ....
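    To confirm the change actually took effect after the gpupdate, the machine's effective account policy can be dumped from an administrative command prompt; a quick sketch:

        net accounts
        rem check the "Maximum password age (days)" and "Minimum password length" lines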

    Read the article

  • Clusterware 11gR2 – Setting up an Active/Passive failover configuration

    - by Gilles Haro
    Oracle provides many interesting ways to ensure high availability. Data Guard configurations, RAC configurations, or even both (as recommended for a Maximum Availability Architecture - MAA) are the most frequently found. But when it comes to protecting a system with an Active/Passive architecture with failover capabilities, one often thinks of expensive third-party cluster systems. Oracle Clusterware technology, which comes free with Oracle Database, is – in the minds of most people – linked to Oracle RAC and is therefore rarely used to implement failover solutions. 11gR2 Clusterware – which is part of Oracle Grid Infrastructure – provides a comprehensive framework to set up automatic failover configurations. It is actually possible to make almost every kind of application "failover-able", and therefore to protect it (from xclock to a more complex application server). In the next couple of lines, I will try to present the different steps to achieve this goal: have a fully operational 11gR2 database protected by automatic failover capabilities. I assume you are fluent in installing Oracle Database 11gR2 and Oracle Grid Infrastructure 11gR2 on a Linux system, and that ASM is not a problem for you (as I am using it as shared storage). If not, please have a look at the Oracle documentation. As often, I made my tests using an Oracle VirtualBox environment. The scripts are tested and functional; unfortunately, there can always be a typo or a mistake. This blog entry is not a course on the Clusterware framework. I just hope it will let you see how powerful it is and give you the wish to go further with it…

    Prerequisites: 2 Linux boxes (OELCluster01 and OELCluster02) at the same OS level (I used OEL 5 Update 5 with the Enterprise Kernel); shared storage (SAN) - on my VirtualBox system, I used Openfiler to simulate the SAN; Oracle 11gR2 Database (11.2.0.1); Oracle 11gR2 Grid Infrastructure (11.2.0.1).

    Step 1 – Install the software. Using asmlib, create 3 ASM disks (ASM_CRS, ASM_DTA and ASM_FRA). Install Grid Infrastructure for a cluster (OELCluster01 and OELCluster02 are the 2 nodes of the cluster). Use ASM_CRS to store the Voting Disk and OCR. Use SCAN. Install the Oracle Database standalone binaries on both nodes. Use asmca to check/mount the disk groups on the 2 nodes. Use dbca to create and configure a database on the primary node; let's name it DB11G. Copy the pfile and password file to the second node. Create the adump directory on the second node.

    Step 2 - Set up the resource to be protected. After its creation with dbca, the database is automatically protected by the Oracle Restart technology available with Grid Infrastructure. Consequently, it restarts automatically (if possible) after a crash (e.g. kill –9 smon). A database resource has been created for that in the Cluster Registry. We can observe this with the command crsctl status resource, which shows an ora.db11g.db entry. Let's save the definition of this resource for future use: mkdir –p /crs/11.2.0/HA_scripts chown oracle:oinstall /crs/11.2.0/HA_scripts crsctl status resource ora.db11g.db -p > /crs/11.2.0/HA_scripts/myResource.txt Although very interesting, Oracle Restart is not cluster aware and cannot restart the database on any other node of the cluster. So let's remove it from the OCR definitions, we don't need it! srvctl stop database -d DB11G srvctl remove database -d DB11G Instead, we need to create a new resource of a more general type: cluster_resource.
Here are the steps to achieve this : Create an action script :  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh #!/bin/bash export ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 export ORACLE_SID=DB11G case $1 in 'start')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   startup EOF   RET=0   ;; 'stop')   $ORACLE_HOME/bin/sqlplus /nolog <<EOF   connect / as sysdba   shutdown immediate EOF   RET=0   ;; 'check')    ok=`ps -ef | grep smon | grep $ORACLE_SID | wc -l`    if [ $ok = 0 ]; then      RET=1    else      RET=0    fi    ;; '*')      RET=0   ;; esac if [ $RET -eq 0 ]; then    exit 0 else    exit 1 fi   This script must provide, at least, methods to start, stop and check the database. It is self-explaining and contains nothing special. Just be aware that it is run as Oracle user (because of the ACL property – see later) and needs to know about the environment. It also needs to be present on every node of the cluster. chmod +x /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh scp  /crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh   oracle@OELCluster02:/crs/11.2.0/HA_scripts Create a new resource file, based on the information we got from previous  myResource.txt . Name it myNewResource.txt. myResource.txt  is shown below. As we can see, it defines an ora.database.type resource, named ora.db11g.db. A lot of properties are related to this type of resource and do not need to be used for a cluster_resource. NAME=ora.db11g.db TYPE=ora.database.type ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_FAILURE_TEMPLATE= ACTION_SCRIPT= ACTIVE_PLACEMENT=1 AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX% AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=1 CHECK_TIMEOUT=600 CLUSTER_DATABASE=false DB_UNIQUE_NAME=DB11G DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) DEGREE=1 DESCRIPTION=Oracle Database resource ENABLED=1 FAILOVER_DELAY=0 FAILURE_INTERVAL=60 FAILURE_THRESHOLD=1 GEN_AUDIT_FILE_DEST=/oracle/admin/DB11G/adump GEN_USR_ORA_INST_NAME= GEN_USR_ORA_INST_NAME@SERVERNAME(oelcluster01)=DB11G HOSTING_MEMBERS= INSTANCE_FAILOVER=0 LOAD=1 LOGGING_LEVEL=1 MANAGEMENT_POLICY=AUTOMATIC NLS_LANG= NOT_RESTARTING_TEMPLATE= OFFLINE_CHECK_INTERVAL=0 ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 PLACEMENT=restricted PROFILE_CHANGE_TEMPLATE= RESTART_ATTEMPTS=2 ROLE=PRIMARY SCRIPT_TIMEOUT=60 SERVER_POOLS=ora.DB11G SPFILE=+DTA/DB11G/spfileDB11G.ora START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STATE_CHANGE_TEMPLATE= STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h USR_ORA_DB_NAME=DB11G USR_ORA_DOMAIN=haroland USR_ORA_ENV= USR_ORA_FLAGS= USR_ORA_INST_NAME=DB11G USR_ORA_OPEN_MODE=open USR_ORA_OPI=false USR_ORA_STOP_MODE=immediate VERSION=11.2.0.1.0 I removed database type related entries from myResource.txt and modified some other to produce the following myNewResource.txt. Notice the NAME property that should not have the ora. prefix Notice the TYPE property that is not ora.database.type but cluster_resource. Notice the definition of ACTION_SCRIPT. Notice the HOSTING_MEMBERS that enumerates the members of the cluster (as returned by the olsnodes command). 
    NAME=DB11G.db TYPE=cluster_resource DESCRIPTION=Oracle Database resource ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r-- ACTION_SCRIPT=/crs/11.2.0/HA_scripts/my_ActivePassive_Cluster.sh PLACEMENT=restricted ACTIVE_PLACEMENT=0 AUTO_START=restore CARDINALITY=1 CHECK_INTERVAL=10 DEGREE=1 ENABLED=1 HOSTING_MEMBERS=oelcluster01 oelcluster02 LOGGING_LEVEL=1 RESTART_ATTEMPTS=1 START_DEPENDENCIES=hard(ora.DTA.dg,ora.FRA.dg) weak(type:ora.listener.type,uniform:ora.ons,uniform:ora.eons) pullup(ora.DTA.dg,ora.FRA.dg) START_TIMEOUT=600 STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DTA.dg,shutdown:ora.FRA.dg) STOP_TIMEOUT=600 UPTIME_THRESHOLD=1h

    Register the resource. Take care with the resource type: it needs to be a cluster_resource and not an ora.database.type resource (Oracle recommendation). crsctl add resource DB11G.db -type cluster_resource -file /crs/11.2.0/HA_scripts/myNewResource.txt

    Step 3 - Start the resource. crsctl start resource DB11G.db This command launches the ACTION_SCRIPT with a start and then a check parameter on the primary node of the cluster.

    Step 4 - Test this. We will test the setup using 2 methods. crsctl relocate resource DB11G.db This command calls the ACTION_SCRIPT (on the two nodes) to stop the database on the active node and start it on the other node. Once done, we can revert back to the original node, but this time with a more "MS$-like" method: turn off the server on which the database is running. After a short delay, you should observe that the database is relocated to node 1.

    Conclusion. Once the software is installed and the standalone database created (which is a rather common and usual task), the steps to reach the objective are quite easy: create an executable action script on every node of the cluster; create a resource file; create/register the resource with OCR using the resource file; start the resource. This solution is a very interesting alternative to licensable third-party solutions.

    References: Clusterware 11gR2 documentation; Oracle Clusterware Resource Reference.

    Gilles Haro, Technical Expert - Core Technology, Oracle Consulting
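    While testing, it can also be handy to ask Clusterware where the resource currently lives before and after each failover; a small sketch (standard crsctl usage, node names from the example above):

        # show the current state and hosting node of the resource
        crsctl status resource DB11G.db

        # move it explicitly to the second node, then check again
        crsctl relocate resource DB11G.db -n oelcluster02
        crsctl status resource DB11G.db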

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I immediately tried the new feature (which was immediately available, a welcome surprise in a Microsoft announcement for a new release) and I had several issues trying to use existing data models. The forecasting has a few requirements that are not compatible with the “best practices” commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used in the x-axis of a line chart for forecasting, because you need a date or numeric column. There are also other requirements and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.

    Read the article

  • Non-zero exit status for clean exit

    - by trinithis
    Is it acceptable to return a non-zero exit code if the program in question ran properly? For example, say I have a simple program that (only) does the following: the program takes N arguments and returns an exit code of min(N, 255). Note that any N is valid for the program. A more realistic program might return different codes for successful runs, each signifying something different. Should these programs write this information to a stream instead, such as stdout?
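    For concreteness, the described toy program is a few lines of shell; a sketch (the clamp to 255 reflects the fact that an exit status is a single byte):

        #!/bin/sh
        # exit with min(number of arguments, 255)
        n=$#
        [ "$n" -gt 255 ] && n=255
        exit "$n"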

    Read the article

  • Reinstall Ubuntu keeping my data intact

    - by Magnus
    I have recently upgraded my desktop OS from Ubuntu 12.04 to 12.10 (complete reinstall). Before the switch I made a list of all programs installed on my Ubuntu 12.04: sudo dpkg --get-selections > file After that I reinstalled Ubuntu 12.10, and when all was done I performed the following commands: sudo dpkg --set-selections < file sudo apt-get dselect-upgrade Here is where the problems start. I get several warnings like this when performing the commands above: dpkg: warning: package not in database at line xxx and many of the programs are not installed. I don't know what the warning means. I have searched the web and it seems that I'm not the only one suffering from this, but I have not found any solution that worked for me. Any ideas what is causing this? Regards Magnus Örberg
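    The "package not in database" warnings usually mean that dpkg's own "available" database is still empty on the fresh install, so --set-selections has nothing to match the package names against. A commonly suggested sketch is to repopulate it from apt before replaying the selections:

        sudo apt-get update
        apt-cache dumpavail | sudo dpkg --merge-avail
        sudo dpkg --set-selections < file
        sudo apt-get dselect-upgrade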

    Read the article

  • Crontab -e gives me error messages

    - by DNA
    I get a bunch of error messages when I run crontab -e Here are the error messages. And here is my crontab file under `/usr/bin/': # /etc/crontab: system-wide crontab # Unlike any other crontab you don't have to run the `crontab' # command to install the new version when you edit this file # and files in /etc/cron.d. These files also have username fields, # that none of the other crontabs do. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # m h dom mon dow user command 17 * * * * root cd / && run-parts --report /etc/cron.hourly 25 6 * * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily ) 47 6 * * 7 root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly ) 52 6 1 * * root test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly ) 30 * * * * root rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/ # I notice that the last task ('rsync') NEVER RUNS! Why is this happening? What did I do wrong? Running Ubuntu 11.10/Bash. I have read this... Am I missing a shebang? And I don't know if my anacron jobs run. Edit 1 In light of Masi's comment, I commented out lines 17 thru 25 of my crontab file with #. Now when I run sudo crontab -e, all I get is: /usr/bin/crontab: 11: 17: not found /usr/bin/crontab: 12: 25: not found (gedit:4301): Gtk-WARNING **: Attempting to store changes into `/root/.local/share/recently-used.xbel', but failed: Failed to create file '/root/.local/share/recently-used.xbel.GOHVBW': No such file or directory (gedit:4301): Gtk-WARNING **: Attempting to set the permissions of `/root/.local/share/recently-used.xbel', but failed: No such file or directory What in the world?
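    One thing that stands out (offered as a hedged observation, not a confirmed diagnosis): the pasted file is in the system-crontab format from /etc/crontab, which has a user field, and the error lines "11: 17: not found" / "12: 25: not found" look like a shell trying to execute that text directly, which is what happens if it ends up somewhere other than /etc/crontab (a per-user crontab, or a file under /usr/bin). A per-user crontab edited with crontab -e takes no user field, so the rsync job would look like this there:

        # per-user crontab (crontab -e): min hour dom month dow command
        30 * * * * rsync /home/dnaneet/Downloads/*.pdf /home/dnaneet/Downloads/pdfs/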

    Read the article

  • Java - 2d Array Tile Map Collision

    - by Corey
    How would I go about making certain tiles in my array collide with my player? Like say I want every number 2 in the array to collide. I am reading my array from a txt file if that matters and I am using the slick2d library. Here is my code if needed. public class Tiles { Image[] tiles = new Image[3]; int[][] map = new int[500][500]; Image grass, dirt, mound; SpriteSheet tileSheet; int tileWidth = 32; int tileHeight = 32; public void init() throws IOException, SlickException { tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight); grass = tileSheet.getSprite(0, 0); dirt = tileSheet.getSprite(7, 7); mound = tileSheet.getSprite(2, 6); tiles[0] = grass; tiles[1] = dirt; tiles[2] = mound; int x=0, y=0; BufferedReader in = new BufferedReader(new FileReader("assets/map.txt")); String line; while ((line = in.readLine()) != null) { String[] values = line.split(","); for (String str : values) { int str_int = Integer.parseInt(str); map[x][y]=str_int; //System.out.print(map[x][y] + " "); y=y+1; } //System.out.println(""); x=x+1; y = 0; } in.close(); } public void update() { } public void render(GameContainer gc) { for(int x = 0; x < 50; x++) { for(int y = 0; y < 50; y ++) { int textureIndex = map[y][x]; Image texture = tiles[textureIndex]; texture.draw(x*tileWidth,y*tileHeight); } } } } I tried something like this, but I it doesn't ever "collide". X and y are my player position. if (tiles.map[(int)x/32][(int)y/32] == 2) { System.out.println("Collided"); }

    Read the article

  • How do I tar dot files but not dot directories

    - by bjackfly
    The following tar command will exclude all dot files and dot directories: tar -cvzf /media/bjackfly/bkup/bkup.gz --exclude '.*' --one-file-system /home/bjackfly In my case I want the dot files in the home directory (.vimrc, .bashrc, etc.) to be backed up, but not the dot directories /.config /.cache /.eclipse etc. Any Linux gurus with a command for this, or do I need to run a find into a tar, or run two different tar commands (one for dot files in the home directory and one for everything else), which is non-ideal?
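    One hedged way to get "dot files yes, dot directories no" is to let find enumerate the top-level dot directories and turn them into --exclude options, instead of excluding '.*' wholesale; a sketch (GNU find/tar assumed, and the unquoted expansion assumes the dot-directory names contain no spaces):

        cd /home/bjackfly
        excludes=$(find . -maxdepth 1 -type d -name '.*' ! -name '.' -printf '--exclude=./%f ')
        tar -cvzf /media/bjackfly/bkup/bkup.gz $excludes --one-file-system .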

    Read the article
