Search Results


  • Win 7 - Restore Favorites in Windows Explorer

    - by oceola
    Hi all, I have this issue: the Favorites link in Windows Explorer doesn't work. I can't drag and drop anything onto it, and I can't 'Add current location to Favorites'. Clicking 'Restore Favorites' does nothing. I can't remember when this started, but I assume I accidentally deleted the Favorites folder. I should probably mention that my user profile is NTFS-junctioned to D:\Users\myname. I tried creating a new Favorites folder and giving it all possible permissions, but that doesn't work. I also looked in the registry, under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders, HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders, HKEY_USERS\.default\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders and HKEY_USERS\.default\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders. I played with the values in there (pointing to C:\Users\myname\Favorites, then D:\Users\myname\Favorites), but nothing seemed to help. Any help would be much appreciated.
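
    One possible repair to try (a sketch, not a verified fix — it assumes the profile really does live under D:\Users\myname; adjust the path if not): recreate the Favorites folder, point both per-user Shell Folders values at it, and restart Explorer:

        :: recreate the folder and repoint the per-user shell folder entries
        mkdir "D:\Users\myname\Favorites"
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Favorites /t REG_EXPAND_SZ /d "D:\Users\myname\Favorites" /f
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" /v Favorites /t REG_SZ /d "D:\Users\myname\Favorites" /f
        :: restart Explorer so it re-reads the values
        taskkill /f /im explorer.exe & start explorer.exe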


  • Nothing happens when trying to upgrade from Linux Mint 12 to 13

    - by Ares
    I am trying to upgrade from Linux Mint 12 to 13 using apt-get. After running the following commands, nothing seems to happen. I am fairly new to Linux; what am I doing wrong? user@olympus /etc/default $ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. user@olympus /etc/default $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
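
    Nothing happens because apt-get only upgrades within the release your sources point at; "0 upgraded" is expected until /etc/apt/sources.list references the Mint 13 repositories. A rough sketch of an in-place switch (the codename substitutions lisa→maya and oneiric→precise are my assumption for Mint 12→13 — double-check them first, and note that Mint itself generally recommends a fresh install over dist-upgrading across releases):

        # back up the current sources, then point them at the new release
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i 's/lisa/maya/g; s/oneiric/precise/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get dist-upgrade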


  • Does NMBD depend on DHCP?

    - by Atilla Filiz
    I am trying to debug an SMB share issue on an embedded Linux setup. Before diving into source code, I want to make sure this is not a configuration problem. So here is my case: Scenario 1: DHCP server enabled by default 1- system boots 2- udhcpcd server starts 3- smb server starts (smbd) 4- nmb server starts (nmbd) 5- smb share accessible Scenario 2: DHCP server disabled by default 1- system boots 2- smbd starts 3- nmbd fails to start 4- smb share inaccessible 5- $/etc/init.d/udhcpcd start 6- $/usr/sbin/nmbd still fails without an error message The client PC and the server device have static IP addresses in both cases. Is it possible that nmbd somehow depends on a DHCP server at start?
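
    nmbd has no dependency on a DHCP server as such, but it does need at least one usable non-loopback interface (and broadcast address) when it starts, which is easy to lose if networking is only brought up by the DHCP init script. A quick way to check (a sketch; log locations vary by build):

        # show which interfaces Samba is configured to bind to
        testparm -s | grep -i -E 'interfaces|bind'
        # run nmbd in the foreground with debug logging and read why it exits
        nmbd -F -d 3 -l /var/log/samba
        tail -n 50 /var/log/samba/log.nmbd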


  • Bad DC transferring FSMO roles to ADC

    - by Suleman
    I have a DC (FQDN: server.icmcpk.local) and an ADC (FQDN: file-server.icmcpk.local). Recently my DC developed a bad-sector problem, so I moved the Operations Masters to file-server for all five roles. But whenever I turn off the old DC, file-server also stops working with AD and GPMC, and I am also unable to join any other computer to this domain. For test purposes I added a new ADC (FQDN: wds-server.icmcpk.local), but had no success with the old DC off; I had to turn the old DC back on and then join it. I am attaching the dcdiag output for all three servers. Kindly help me, so that I can install a new HDD and bring the DC online again. --------------------------------------- Server --------------------------------------- C:\Program Files\Support Tools>dcdiag Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\SERVER Starting test: Connectivity ......................... SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\SERVER Starting test: Replications [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=ForestDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=DomainDnsZones,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From FILE-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. 
A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: CN=Schema,CN=Configuration,DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. [Replications Check,SERVER] A recent replication attempt failed: From WDS-SERVER to SERVER Naming Context: DC=icmcpk,DC=local The replication generated an error (1908): Could not find the domain controller for this domain. The failure occurred at 2012-05-04 14:07:13. The last success occurred at 2012-05-04 13:48:39. 1 failures have occurred since the last success. Kerberos Error. A KDC was not found to authenticate the call. Check that sufficient domain controllers are available. ......................... SERVER passed test Replications Starting test: NCSecDesc ......................... SERVER passed test NCSecDesc Starting test: NetLogons ......................... SERVER passed test NetLogons Starting test: Advertising ......................... SERVER passed test Advertising Starting test: KnowsOfRoleHolders ......................... SERVER passed test KnowsOfRoleHolders Starting test: RidManager ......................... SERVER passed test RidManager Starting test: MachineAccount ......................... SERVER passed test MachineAccount Starting test: Services ......................... SERVER passed test Services Starting test: ObjectsReplicated ......................... SERVER passed test ObjectsReplicated Starting test: frssysvol ......................... SERVER passed test frssysvol Starting test: frsevent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... SERVER failed test frsevent Starting test: kccevent ......................... SERVER passed test kccevent Starting test: systemlog An Error Event occured. EventID: 0x80001778 Time Generated: 05/04/2012 14:05:39 Event String: The previous system shutdown at 1:26:31 PM on An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:07:45 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:13:40 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:25 (Event String could not be retrieved) An Error Event occured. EventID: 0x00000457 Time Generated: 05/04/2012 14:14:38 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. 
EventID: 0xC1010020 Time Generated: 05/04/2012 14:16:14 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:16:14 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0x825A0011 Time Generated: 05/04/2012 14:22:57 (Event String could not be retrieved) An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for An Error Event occured. EventID: 0xC1010020 Time Generated: 05/04/2012 14:22:59 Event String: Dependent Assembly Microsoft.VC80.MFCLOC could An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Resolve Partial Assembly failed for An Error Event occured. EventID: 0xC101003B Time Generated: 05/04/2012 14:22:59 Event String: Generate Activation Context failed for ......................... SERVER failed test systemlog Starting test: VerifyReferences ......................... SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : icmcpk Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Running enterprise tests on : icmcpk.local Starting test: Intersite ......................... icmcpk.local passed test Intersite Starting test: FsmoCheck ......................... icmcpk.local passed test FsmoCheck ---------------------- File-Server ---------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = FILE-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Connectivity ......................... FILE-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\FILE-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach FILE-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. 
......................... FILE-SERVER failed test Advertising Starting test: FrsEvent ......................... FILE-SERVER passed test FrsEvent Starting test: DFSREvent ......................... FILE-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... FILE-SERVER passed test SysVolCheck Starting test: KccEvent ......................... FILE-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... FILE-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... FILE-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... FILE-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\FILE-SERVER\netlogon) [FILE-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... FILE-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... FILE-SERVER passed test ObjectsReplicated Starting test: Replications ......................... FILE-SERVER passed test Replications Starting test: RidManager ......................... FILE-SERVER passed test RidManager Starting test: Services ......................... FILE-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x00000469 Time Generated: 05/04/2012 14:01:10 Event String: The processing of Group Policy failed because of lack of network con nectivity to a domain controller. This may be a transient condition. A success m essage would be generated once the machine gets connected to the domain controll er and Group Policy has succesfully processed. If you do not see a success messa ge for several hours, then contact your administrator. An Warning Event occurred. EventID: 0x8000A001 Time Generated: 05/04/2012 14:07:11 Event String: The Security System could not establish a secured connection with th e server ldap/icmcpk.local/[email protected]. No authentication protocol was available. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:34 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. An Warning Event occurred. EventID: 0x00000BBC Time Generated: 05/04/2012 14:30:36 Event String: Windows Defender Real-Time Protection agent has detected changes. Mi crosoft recommends you analyze the software that made these changes for potentia l risks. You can use information about how these programs operate to choose whet her to allow them to run or remove them from your computer. Allow changes only if you trust the program or the software publisher. Windows Defender can't undo changes that you allow. ......................... FILE-SERVER failed test SystemLog Starting test: VerifyReferences ......................... 
FILE-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite --------------------- WDS-Server --------------------- C:\Users\Administrator.ICMCPK>dcdiag Directory Server Diagnosis Performing initial setup: Trying to find home server... Home Server = WDS-SERVER * Identified AD Forest. Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Connectivity ......................... WDS-SERVER passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\WDS-SERVER Starting test: Advertising Warning: DsGetDcName returned information for \\Server.icmcpk.local, when we were trying to reach WDS-SERVER. SERVER IS NOT RESPONDING or IS NOT CONSIDERED SUITABLE. ......................... WDS-SERVER failed test Advertising Starting test: FrsEvent There are warning or error events within the last 24 hours after the SYSVOL has been shared. Failing SYSVOL replication problems may cause Group Policy problems. ......................... WDS-SERVER passed test FrsEvent Starting test: DFSREvent ......................... WDS-SERVER passed test DFSREvent Starting test: SysVolCheck ......................... WDS-SERVER passed test SysVolCheck Starting test: KccEvent ......................... WDS-SERVER passed test KccEvent Starting test: KnowsOfRoleHolders ......................... WDS-SERVER passed test KnowsOfRoleHolders Starting test: MachineAccount ......................... WDS-SERVER passed test MachineAccount Starting test: NCSecDesc Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=ForestDnsZones,DC=icmcpk,DC=local Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context: DC=DomainDnsZones,DC=icmcpk,DC=local ......................... WDS-SERVER failed test NCSecDesc Starting test: NetLogons Unable to connect to the NETLOGON share! (\\WDS-SERVER\netlogon) [WDS-SERVER] An net use or LsaPolicy operation failed with error 67, The network name cannot be found.. ......................... 
WDS-SERVER failed test NetLogons Starting test: ObjectsReplicated ......................... WDS-SERVER passed test ObjectsReplicated Starting test: Replications ......................... WDS-SERVER passed test Replications Starting test: RidManager ......................... WDS-SERVER passed test RidManager Starting test: Services ......................... WDS-SERVER passed test Services Starting test: SystemLog An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:02:55 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. An Error Event occurred. EventID: 0x0000041E Time Generated: 05/04/2012 14:08:33 Event String: The processing of Group Policy failed. Windows could not obtain the name of a domain controller. This could be caused by a name resolution failure. Verify your Domain Name Sysytem (DNS) is configured and working correctly. ......................... WDS-SERVER failed test SystemLog Starting test: VerifyReferences ......................... WDS-SERVER passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Running partition tests on : DomainDnsZones Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Running partition tests on : Schema Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Running partition tests on : Configuration Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Running partition tests on : icmcpk Starting test: CheckSDRefDom ......................... icmcpk passed test CheckSDRefDom Starting test: CrossRefValidation ......................... icmcpk passed test CrossRefValidation Running enterprise tests on : icmcpk.local Starting test: LocatorCheck ......................... icmcpk.local passed test LocatorCheck Starting test: Intersite ......................... icmcpk.local passed test Intersite
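
    The repeated "A KDC was not found" errors and the DsGetDcName warnings suggest the surviving DCs still depend on the old box for DNS and SRV lookups rather than a failed role transfer as such: check that file-server and wds-server point at themselves (not at the old DC) for DNS, that file-server is a global catalog, and that the _msdcs SRV records exist in its zone. For the roles themselves, a sketch using the standard tooling (only seize if the old DC will never come back online; otherwise transfer normally):

        rem confirm which server the directory currently thinks holds the five roles
        netdom query fsmo

        rem seize the roles on the surviving DC if the transfer never completed
        ntdsutil
          roles
          connections
          connect to server file-server
          quit
          seize pdc
          seize rid master
          seize infrastructure master
          seize naming master
          seize schema master
          quit
          quit

        rem re-register DNS records and restart Netlogon on the surviving DCs
        ipconfig /registerdns
        net stop netlogon && net start netlogon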


  • Cannot log in to SQL Server 2008 R2 with Windows authentication

    - by Ian Boyd
    When I try to connect to SQL Server (2008 R2) using Windows authentication, I cannot. Checking the Windows Application event log, I find the error: Login failed for user 'AVATOPIA\ian'. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors. [CLIENT: ] Log Name: Application Source: MSSQLSERVER Event ID: 18456 Level: Information User: AVATOPIA\ian OpCode: Task Category: Logon I can log in to the computer itself using Windows authentication. I can log into SQL Server using the local Windows Administrator account. We can connect to 8 other SQL Servers on the domain using Windows authentication. Just this one, which is the only one that is 2008 R2, is failing, so I assume it's a bug with 2008 R2. Note: I cannot log on locally or remotely using Windows authentication. I can log in locally and remotely using SQL Server authentication. Update Note: it's not limited to SQL Server Management Studio; standalone applications that connect using Windows authentication fail as well. Note: it's not a client problem, as we can connect fine to other (non-SQL Server 2008 R2) machines. I'm sure there's a technote or knowledge base article describing why SQL Server 2008 R2 is broken by default, but I can't find it. Update 2 Matt figured out the change that Microsoft made so that SQL Server 2008 R2 is broken by default: Administrators are no longer administrators. All that remains is to figure out how to make Administrators administrators. One of these days I'm going to start a list of changes around Microsoft's "broken by default" initiative. Steps to reproduce the problem How do I add a group to the sysadmin fixed server role? Here are the steps I try, which don't work: Click Add. Click Object Types and note that you have no ability to add groups; click OK. Under Enter the object names to select, enter Administrators. Click Check Names, and note that you are not allowed to add groups; click Cancel. Click Browse..., and note that you have no ability to add groups. You should now still not have added any group to the sysadmin role. Additional information SQL Server Management Studio is being run as an administrator. SQL Server is set to use Windows authentication. Tried while logged into SQL with both sa and the only other sysadmin domain account (screenshot can be supplied for those who don't believe).
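
    For reference, the change being described is that SQL Server 2008 / 2008 R2 setup no longer adds BUILTIN\Administrators to the sysadmin role, and the SSMS dialog won't browse for Windows groups — but the group (or a single account) can still be granted from T-SQL while connected as an existing sysadmin such as sa. A sketch:

        -- create a login for the local Administrators group and add it to sysadmin
        CREATE LOGIN [BUILTIN\Administrators] FROM WINDOWS;
        EXEC sp_addsrvrolemember @loginame = N'BUILTIN\Administrators', @rolename = N'sysadmin';

        -- or grant just the one domain account instead of the whole group
        CREATE LOGIN [AVATOPIA\ian] FROM WINDOWS;
        EXEC sp_addsrvrolemember @loginame = N'AVATOPIA\ian', @rolename = N'sysadmin';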


  • Graphite SQLite3 DatabaseError: attempt to write a readonly database

    - by Anadi Misra
    Running Graphite under Apache httpd, with an SQLite database, I have the correct folder permissions: [root@liaan55 httpd]# ls -ltr /var/lib | grep graphite drwxr-xr-x. 2 apache apache 4096 Aug 23 19:36 graphite-web and [root@liaan55 httpd]# ls -ltr /var/lib/graphite-web/ total 68 -rw-r--r--. 1 apache apache 65536 Aug 23 19:46 graphite.db syncdb also seems to have gone fine: [root@liaan55 httpd]# sudo -su apache bash-4.1$ whoami apache bash-4.1$ python /usr/lib/python2.6/site-packages/graphite/manage.py syncdb /usr/lib/python2.6/site-packages/graphite/settings.py:231: UserWarning: SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security warn('SECRET_KEY is set to an unsafe default. This should be set in local_settings.py for better security') /usr/lib/python2.6/site-packages/django/conf/__init__.py:75: DeprecationWarning: The ADMIN_MEDIA_PREFIX setting has been removed; use STATIC_URL instead. "use STATIC_URL instead.", DeprecationWarning) /usr/lib/python2.6/site-packages/django/core/cache/__init__.py:82: DeprecationWarning: settings.CACHE_* is deprecated; use settings.CACHES instead. DeprecationWarning Creating tables ... Creating table account_profile Creating table account_variable Creating table account_view Creating table account_window Creating table account_mygraph Creating table dashboard_dashboard_owners Creating table dashboard_dashboard Creating table events_event Creating table auth_permission Creating table auth_group_permissions Creating table auth_group Creating table auth_user_user_permissions Creating table auth_user_groups Creating table auth_user Creating table django_session Creating table django_admin_log Creating table django_content_type Creating table tagging_tag Creating table tagging_taggeditem You just installed Django's auth system, which means you don't have any superusers defined. Would you like to create one now? (yes/no): yes Username (leave blank to use 'apache'): root E-mail address: [email protected] Password: Password (again): Superuser created successfully. Installing custom SQL ... Installing indexes ... Installed 0 object(s) from 0 fixture(s) bash-4.1$ exit and the local_settings.py file is as follows: STORAGE_DIR = '/var/lib/graphite-web' INDEX_FILE = '/var/lib/graphite-web/index' DATABASES = { 'default': { 'NAME': '/var/lib/graphite-web/graphite.db', 'ENGINE': 'django.db.backends.sqlite3', 'USER': '', 'PASSWORD': '', 'HOST': '', 'PORT': '' } } I still get this error: [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] File "/usr/lib/python2.6/site-packages/django/db/backends/sqlite3/base.py", line 344, in execute [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] return Database.Cursor.execute(self, query, params) [Sat Aug 23 19:47:17 2014] [error] [client 10.42.33.238] DatabaseError: attempt to write a readonly database I'm not sure what is missing in this configuration.
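
    "attempt to write a readonly database" is usually not about graphite.db itself: SQLite also has to create journal files in the containing directory, and on RHEL-style systems SELinux can block httpd from writing there even when ownership looks right (the trailing dot in drwxr-xr-x. is an SELinux context). A sketch of what to check — the relabel at the end is an assumption to verify against the audit log first:

        # can the apache user itself write there? (rules out plain permissions)
        sudo -u apache touch /var/lib/graphite-web/writetest && sudo rm /var/lib/graphite-web/writetest

        # is SELinux denying httpd? look for AVC denials, then relabel the directory
        getenforce
        sudo ausearch -m avc -ts recent | grep graphite
        sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/lib/graphite-web(/.*)?"
        sudo restorecon -Rv /var/lib/graphite-web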


  • Two domains, two servers, one dynamic IP address

    - by giantman
    As I said, I have two domains, hi.org and bye.net, one dynamic IP address and two servers. I want to attach one domain, bye.net, to server 1 and hi.org to server 2, using Apache (WAMP 2.0i). I hope someone will be able to answer. httpd.conf file additions: ProxyRequests Off Order deny,allow Allow from all vhost file additions: NameVirtualHost *:80 default DocumentRoot "c:/wamp/www/fallback" Server 1 DocumentRoot "c:/wamp/www" ServerName http://bye.net ServerAlias bye.net Server 2 ProxyPreserveHost On ProxyPass / http://192.168.1.119/ DocumentRoot "g:/wamp/www" ServerName http://hi.org ServerAlias hi.org After doing all this I always fall back to server 1: I don't get the hi.org page, I only get the bye.net page, and I don't even get the default fallback page that should be served when someone enters the IP address rather than a domain name. I use Windows 7 (server 2) and Windows XP (server 1).
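
    For comparison, a minimal version of the setup being described — two name-based <VirtualHost *:80> blocks on server 1 (the machine the router forwards port 80 to), with the second block reverse-proxying to server 2. Hostnames and the 192.168.1.119 address are taken from the question; treat this as a sketch rather than a drop-in vhosts file, and note that mod_proxy and mod_proxy_http must be loaded:

        NameVirtualHost *:80

        # served locally from server 1
        <VirtualHost *:80>
            ServerName bye.net
            ServerAlias www.bye.net
            DocumentRoot "c:/wamp/www"
        </VirtualHost>

        # proxied through to server 2
        <VirtualHost *:80>
            ServerName hi.org
            ServerAlias www.hi.org
            ProxyPreserveHost On
            ProxyPass / http://192.168.1.119/
            ProxyPassReverse / http://192.168.1.119/
        </VirtualHost>

    Whichever virtual host is defined first also acts as the fallback for requests that arrive by bare IP, so a separate "default" block is optional.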


  • /etc/environment not being read into PATH variable

    - by Dan
    In Ubuntu the path variable is stored in /etc/environment. This is mine (I've made no changes to it, this is the system default): $ cat /etc/environment PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" but when I examine my PATH variable: $ echo $PATH /home/dan/bin:/home/dan/bin:/bin:/usr/bin:/usr/local/bin:/usr/bin/X11 You'll notice /usr/games is missing (it was there up until a few days ago). My /etc/profile makes no mention of PATH. My ~/.profile is the default and only has: if [ -d "$HOME/bin" ] ; then PATH="$HOME/bin:$PATH" fi This only happens in GNOME, not in tty1-6; /usr/games is missing both in the GNOME terminal and when I try to call applications from the applications dropdown. Anyone know what could be causing this? Thanks.
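
    /etc/environment is not a shell script — it is read once at login by pam_env — so a PATH like this usually means either the graphical login path skips pam_env or something later in the session overwrites PATH. A few things worth checking (the file names below are the usual Ubuntu ones and may differ on your release):

        # confirm pam_env (which reads /etc/environment) runs for graphical logins
        grep pam_env /etc/pam.d/gdm /etc/pam.d/login 2>/dev/null

        # find anything else that sets PATH for the session
        grep -n "PATH=" ~/.profile ~/.bashrc ~/.gnomerc /etc/profile /etc/profile.d/*.sh 2>/dev/null

        # then log out and back in (opening a new terminal is not enough) and re-check
        echo "$PATH"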


  • Create user in Oracle 11g with same privileges as in Oracle 10g XE

    - by Álvaro G. Vicario
    I'm a PHP developer (not a DBA) and I've been working with Oracle 10g XE for a while. I'm used to XE's simplified user management: Go to Administration/ Users/ Create user Assign user name and password Roles: leave the default ones (connect and resource) Privileges: click on "Enable all" to select the 11 possible ones Create This way I get a user that has full access to its data and no access to everything else. This is fine since I only need it to develop my app. When the app is to be deployed, the client's DBAs configure the environment. Now I have to create users in a full Oracle 11g server and I'm completely lost. I have a new concept (profiles) and there're like 20 roles and hundreds of privileges in various categories. What steps do I need to complete in Oracle Enterprise Manager in order to obtain a user with the same privileges I used to assign in XE? ==== UPDATE ==== I think I'd better provide a detailed explanation so I make myself clearer. This is how I create a user in 10g XE: Roles: [X] CONNECT [X] RESOURCE [ ] DBA Direct Asignment System Privileges: [ ] CREATE DATABASE LINK [ ] CREATE MATERIALIZED VIEW [ ] CREATE PROCEDURE [ ] CREATE PUBLIC SYNONYM [ ] CREATE ROLE [ ] CREATE SEQUENCE [ ] CREATE SYNONYM [ ] CREATE TABLE [ ] CREATE TRIGGER [ ] CREATE TYPE [ ] CREATE VIEW I click on Enable All and I'm done. This is what I'm asked when doing the same in 11g: Profile: (*) DEFAULT ( ) WKSYS_PROF ( ) MONITORING_PROFILE Roles: CONNECT: [ ] Admin option [X] Default value Edit List: AQ_ADMINISTRATOR_ROLE AQ_USER_ROLE AUTHENTICATEDUSER CSW_USR_ROLE CTXAPP CWM_USER DATAPUMP_EXP_FULL_DATABASE DATAPUMP_IMP_FULL_DATABASE DBA DELETE_CATALOG_ROLE EJBCLIENT EXECUTE_CATALOG_ROLE EXP_FULL_DATABASE GATHER_SYSTEM_STATISTICS GLOBAL_AQ_USER_ROLE HS_ADMIN_ROLE IMP_FULL_DATABASE JAVADEBUGPRIV JAVAIDPRIV JAVASYSPRIV JAVAUSERPRIV JAVA_ADMIN JAVA_DEPLOY JMXSERVER LOGSTDBY_ADMINISTRATOR MGMT_USER OEM_ADVISOR OEM_MONITOR OLAPI_TRACE_USER OLAP_DBA OLAP_USER OLAP_XS_ADMIN ORDADMIN OWB$CLIENT OWB_DESIGNCENTER_VIEW OWB_USER RECOVERY_CATALOG_OWNER RESOURCE SCHEDULER_ADMIN SELECT_CATALOG_ROLE SPATIAL_CSW_ADMIN SPATIAL_WFS_ADMIN WFS_USR_ROLE WKUSER WM_ADMIN_ROLE XDBADMIN XDB_SET_INVOKER XDB_WEBSERVICES XDB_WEBSERVICES_OVER_HTTP XDB_WEBSERVICES_WITH_PUBLIC System Privileges: <Empty> Edit List: ACCESS_ANY_WORKSPACE ADMINISTER ANY SQL TUNING SET ADMINISTER DATABASE TRIGGER ADMINISTER RESOURCE MANAGER ADMINISTER SQL MANAGEMENT OBJECT ADMINISTER SQL TUNING SET ADVISOR ALTER ANY ASSEMBLY ALTER ANY CLUSTER ALTER ANY CUBE ALTER ANY CUBE DIMENSION ALTER ANY DIMENSION ALTER ANY EDITION ALTER ANY EVALUATION CONTEXT ALTER ANY INDEX ALTER ANY INDEXTYPE ALTER ANY LIBRARY ALTER ANY MATERIALIZED VIEW ALTER ANY MINING MODEL ALTER ANY OPERATOR ALTER ANY OUTLINE ALTER ANY PROCEDURE ALTER ANY ROLE ALTER ANY RULE ALTER ANY RULE SET ALTER ANY SEQUENCE ALTER ANY SQL PROFILE ALTER ANY TABLE ALTER ANY TRIGGER ALTER ANY TYPE ALTER DATABASE ALTER PROFILE ALTER RESOURCE COST ALTER ROLLBACK SEGMENT ALTER SESSION ALTER SYSTEM ALTER TABLESPACE ALTER USER ANALYZE ANY ANALYZE ANY DICTIONARY AUDIT ANY AUDIT SYSTEM BACKUP ANY TABLE BECOME USER CHANGE NOTIFICATION COMMENT ANY MINING MODEL COMMENT ANY TABLE CREATE ANY ASSEMBLY CREATE ANY CLUSTER CREATE ANY CONTEXT CREATE ANY CUBE CREATE ANY CUBE BUILD PROCESS CREATE ANY CUBE DIMENSION CREATE ANY DIMENSION CREATE ANY DIRECTORY CREATE ANY EDITION CREATE ANY EVALUATION CONTEXT CREATE ANY INDEX CREATE ANY INDEXTYPE CREATE ANY JOB CREATE ANY LIBRARY CREATE ANY MATERIALIZED VIEW CREATE ANY MEASURE 
FOLDER CREATE ANY MINING MODEL CREATE ANY OPERATOR CREATE ANY OUTLINE CREATE ANY PROCEDURE CREATE ANY RULE CREATE ANY RULE SET CREATE ANY SEQUENCE CREATE ANY SQL PROFILE CREATE ANY SYNONYM CREATE ANY TABLE CREATE ANY TRIGGER CREATE ANY TYPE CREATE ANY VIEW CREATE ASSEMBLY CREATE CLUSTER CREATE CUBE CREATE CUBE BUILD PROCESS CREATE CUBE DIMENSION CREATE DATABASE LINK CREATE DIMENSION CREATE EVALUATION CONTEXT CREATE EXTERNAL JOB CREATE INDEXTYPE CREATE JOB CREATE LIBRARY CREATE MATERIALIZED VIEW CREATE MEASURE FOLDER CREATE MINING MODEL CREATE OPERATOR CREATE PROCEDURE CREATE PROFILE CREATE PUBLIC DATABASE LINK CREATE PUBLIC SYNONYM CREATE ROLE CREATE ROLLBACK SEGMENT CREATE RULE CREATE RULE SET CREATE SEQUENCE CREATE SESSION CREATE SYNONYM CREATE TABLE CREATE TABLESPACE CREATE TRIGGER CREATE TYPE CREATE USER CREATE VIEW CREATE_ANY_WORKSPACE DEBUG ANY PROCEDURE DEBUG CONNECT SESSION DELETE ANY CUBE DIMENSION DELETE ANY MEASURE FOLDER DELETE ANY TABLE DEQUEUE ANY QUEUE DROP ANY ASSEMBLY DROP ANY CLUSTER DROP ANY CONTEXT DROP ANY CUBE DROP ANY CUBE BUILD PROCESS DROP ANY CUBE DIMENSION DROP ANY DIMENSION DROP ANY DIRECTORY DROP ANY EDITION DROP ANY EVALUATION CONTEXT DROP ANY INDEX DROP ANY INDEXTYPE DROP ANY LIBRARY DROP ANY MATERIALIZED VIEW DROP ANY MEASURE FOLDER DROP ANY MINING MODEL DROP ANY OPERATOR DROP ANY OUTLINE DROP ANY PROCEDURE DROP ANY ROLE DROP ANY RULE DROP ANY RULE SET DROP ANY SEQUENCE DROP ANY SQL PROFILE DROP ANY SYNONYM DROP ANY TABLE DROP ANY TRIGGER DROP ANY TYPE DROP ANY VIEW DROP PROFILE DROP PUBLIC DATABASE LINK DROP PUBLIC SYNONYM DROP ROLLBACK SEGMENT DROP TABLESPACE DROP USER ENQUEUE ANY QUEUE EXECUTE ANY ASSEMBLY EXECUTE ANY CLASS EXECUTE ANY EVALUATION CONTEXT EXECUTE ANY INDEXTYPE EXECUTE ANY LIBRARY EXECUTE ANY OPERATOR EXECUTE ANY PROCEDURE EXECUTE ANY PROGRAM EXECUTE ANY RULE EXECUTE ANY RULE SET EXECUTE ANY TYPE EXECUTE ASSEMBLY EXPORT FULL DATABASE FLASHBACK ANY TABLE FLASHBACK ARCHIVE ADMINISTER FORCE ANY TRANSACTION FORCE TRANSACTION FREEZE_ANY_WORKSPACE GLOBAL QUERY REWRITE GRANT ANY OBJECT PRIVILEGE GRANT ANY PRIVILEGE GRANT ANY ROLE IMPORT FULL DATABASE INSERT ANY CUBE DIMENSION INSERT ANY MEASURE FOLDER INSERT ANY TABLE LOCK ANY TABLE MANAGE ANY FILE GROUP MANAGE ANY QUEUE MANAGE FILE GROUP MANAGE SCHEDULER MANAGE TABLESPACE MERGE ANY VIEW MERGE_ANY_WORKSPACE ON COMMIT REFRESH QUERY REWRITE READ ANY FILE GROUP REMOVE_ANY_WORKSPACE RESTRICTED SESSION RESUMABLE ROLLBACK_ANY_WORKSPACE SELECT ANY CUBE SELECT ANY CUBE DIMENSION SELECT ANY DICTIONARY SELECT ANY MINING MODEL SELECT ANY SEQUENCE SELECT ANY TABLE SELECT ANY TRANSACTION UNDER ANY TABLE UNDER ANY TYPE UNDER ANY VIEW UNLIMITED TABLESPACE UPDATE ANY CUBE UPDATE ANY CUBE BUILD PROCESS UPDATE ANY CUBE DIMENSION UPDATE ANY TABLE Object Privileges: <Empty> Add: Clase Java Clases de Trabajos Cola Columna de Tabla Columna de Vista Espacio de Trabajo Función Instantánea Origen Java Paquete Planificaciones Procedimiento Programas Secuencia Sinónimo Tabla Tipos Trabajos Vista Consumer Group Privileges: <Empty> Default Consumer Group: (*) None Edit List: AUTO_TASK_CONSUMER_GROUP BATCH_GROUP DEFAULT_CONSUMER_GROUP INTERACTIVE_GROUP LOW_GROUP ORA$AUTOTASK_HEALTH_GROUP ORA$AUTOTASK_MEDIUM_GROUP ORA$AUTOTASK_SPACE_GROUP ORA$AUTOTASK_SQL_GROUP ORA$AUTOTASK_STATS_GROUP ORA$AUTOTASK_URGENT_GROUP ORA$DIAGNOSTICS SYS_GROUP And, of course, I wonder what options I should pick.
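
    In plain SQL, the XE-style developer account boils down to roughly the sketch below — the user name, password and tablespaces are placeholders, and the client's DBAs may prefer a dedicated profile and explicit quotas instead. The profile and consumer-group pages of the Enterprise Manager wizard can be left at their defaults:

        CREATE USER appdev IDENTIFIED BY "change_me"
          DEFAULT TABLESPACE users
          TEMPORARY TABLESPACE temp
          QUOTA UNLIMITED ON users;

        -- the two roles XE assigns by default
        GRANT CONNECT, RESOURCE TO appdev;

        -- the eleven direct system privileges XE's "Enable all" button ticks
        GRANT CREATE DATABASE LINK, CREATE MATERIALIZED VIEW, CREATE PROCEDURE,
              CREATE PUBLIC SYNONYM, CREATE ROLE, CREATE SEQUENCE, CREATE SYNONYM,
              CREATE TABLE, CREATE TRIGGER, CREATE TYPE, CREATE VIEW
          TO appdev;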



  • How to get headphones and speakers working at the same time?

    - by Borek
    I have a Gigabyte P55-UD3 motherboard which comes with a Realtek sound card and audio header for the front panel. My headset is permanently connected to the front panel (mic in, headphones in) and I also have speakers connected at the rear of the computer. What I want to achieve is that both the headphones and the speakers work at the same time. On my old PC with Realtek, I was able to do this by clicking "Device advanced settings" in the Realtek HD Audio Manager and then clicking "Make front and rear output devices playback two different audio streams simultaneously" and then doing some reassigning of front/rear settings in the Realtek Audio Manager (can't remember the details). But that doesn't seem to be working now - either the speakers play sounds or the headphones, not both at the same time. To pose the question differently, Windows plays the sounds on the "default device". I'd like to make both headphones and the speakers kind of like the default devices.


  • How Exactly Is One Linux OS “Based On” Another Linux OS?

    - by Jason Fitzpatrick
    When reviewing different flavors of Linux, you’ll frequently come across phrases like “Ubuntu is based on Debian” but what exactly does that mean? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader PLPiper is trying to get a handle on how Linux variants work: I’ve been looking through quite a number of Linux distros recently to get an idea of what’s around, and one phrase that keeps coming up is that “[this OS] is based on [another OS]“. For example: Fedora is based on Red Hat Ubuntu is based on Debian Linux Mint is based on Ubuntu For someone coming from a Mac environment I understand how “OS X is based on Darwin”, however when I look at Linux Distros, I find myself asking “Aren’t they all based on Linux..?” In this context, what exactly does it mean for one Linux OS to be based on another Linux OS? So, what exactly does it mean when we talk about one version of Linux being based off another version? The Answer SuperUser contributor kostix offers a solid overview of the whole system: Linux is a kernel — a (complex) piece of software which works with the hardware and exports a certain Application Programming Interface (API), and binary conventions on how to precisely use it (Application Binary Interface, ABI) available to the “user-space” applications. Debian, RedHat and others are operating systems — complete software environments which consist of the kernel and a set of user-space programs which make the computer useful as they perform sensible tasks (sending/receiving mail, allowing you to browse the Internet, driving a robot etc). Now each such OS, while providing mostly the same software (there are not so many free mail server programs or Internet browsers or desktop environments, for example) differ in approaches to do this and also in their stated goals and release cycles. Quite typically these OSes are called “distributions”. This is, IMO, a somewhat wrong term stemming from the fact you’re technically able to build all the required software by hand and install it on a target machine, so these OSes distribute the packaged software so you either don’t need to build it (Debian, RedHat) or they facilitate such building (Gentoo). They also usually provide an installer which helps to install the OS onto a target machine. Making and supporting an OS is a very complicated task requiring a complex and intricate infrastructure (upload queues, build servers, a bug tracker, and archive servers, mailing list software etc etc etc) and staff. This obviously raises a high barrier for creating a new, from-scratch OS. For instance, Debian provides ca. 37k packages for some five hardware architectures — go figure how much work is put into supporting this stuff. Still, if someone thinks they need to create a new OS for whatever reason, it may be a good idea to use an existing foundation to build on. And this is exactly where OSes based on other OSes come into existence. For instance, Ubuntu builds upon Debian by just importing most packages from it and repackaging only a small subset of them, plus packaging their own, providing their own artwork, default settings, documentation etc. Note that there are variations to this “based on” thing. 
    For instance, Debian fosters the creation of “pure blends” of itself: distributions which use Debian rather directly, and just add a bunch of packages and other stuff only useful for rather small groups of users such as those working in education, medicine or the music industry. Another twist is that not all these OSes are based on Linux. For instance, Debian also provides FreeBSD and Hurd kernels. They have quite tiny user groups, but anyway. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
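
    As a concrete footnote to the answer above, the package tooling on a running Debian-derived system shows this ancestry directly (purely illustrative commands):

        lsb_release -a            # the derivative's own name, release and codename
        cat /etc/debian_version   # on Ubuntu/Mint this still reports the Debian base being tracked
        apt-cache policy bash     # which repository (Mint's, Ubuntu's, ...) a given package comes from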


  • Redmine install not working and displaying directory contents - Ubuntu 10.04

    - by Casey Flynn
    I've gone through the steps to set up and install the redmine project tracking web app on my VPS with Apache2 but I'm running into a situation where instead of displaying the redmine app, I just see the directory contents: Does anyone know what could be the problem? I'm not sure what other files might be of use to diagnose what's going on. Thanks! # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ # Enable fastcgi for .fcgi files # (If you're using a distro package for mod_fcgi, something like # this is probably already present) #<IfModule mod_fcgid.c> # AddHandler fastcgi-script .fcgi # FastCgiIpcDir /var/lib/apache2/fastcgi #</IfModule> LoadModule fcgid_module /usr/lib/apache2/modules/mod_fcgid.so LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so PassengerRoot /var/lib/gems/1.8/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby1.8 ServerName demo and my vhosts file #No DNS server, default ip address v-host #domain: none #public: /home/casey/public_html/app/ <VirtualHost *:80> ServerAdmin webmaster@localhost # ScriptAlias /redmine /home/casey/public_html/app/redmine/dispatch.fcgi DirectoryIndex index.html DocumentRoot /home/casey/public_html/app/public <Directory "/home/casey/trac/htdocs"> Order allow,deny Allow from all </Directory> <Directory /var/www/redmine> RailsBaseURI /redmine PassengerResolveSymlinksInDocumentRoot on </Directory> # <Directory /> # Options FollowSymLinks # AllowOverride None # </Directory> # <Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all # </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /home/casey/public_html/app/log/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog /home/casey/public_html/app/log/access.log combined # Alias /doc/ "/usr/share/doc/" # <Directory "/usr/share/doc/"> # Options Indexes MultiViews FollowSymLinks # AllowOverride None # Order deny,allow # Deny from all # Allow from 127.0.0.0/255.0.0.0 ::1/128 # </Directory> </VirtualHost>
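
    A bare directory listing like that usually means Passenger never claimed the request, so Apache fell back to indexing DocumentRoot. Assuming Redmine itself lives under /home/casey/public_html/app (a guess based on the paths above), a pared-down Passenger vhost looks roughly like this — DocumentRoot must point at Redmine's public/ directory and directory indexes should be off:

        <VirtualHost *:80>
            ServerName redmine.example.com          # hypothetical hostname
            DocumentRoot /home/casey/public_html/app/public

            <Directory /home/casey/public_html/app/public>
                Options -Indexes +FollowSymLinks    # -Indexes stops the raw listing
                AllowOverride all
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    It is also worth confirming the Passenger module really loads (apache2ctl -t -D DUMP_MODULES | grep passenger); if it does not, the LoadModule/PassengerRoot lines are the first thing to re-check.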


  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    I have recently been looking at the latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I had put together some notes to help me gain a better understanding of the context of the PS6 BPM Case Management. Hopefully, this along with the other resources will enable you to gain a clear picture of the flexibility of this feature. The Oracle BPM PS6 release includes Case Management capability. This initial release aims to provide a Case Management framework and integration of Case Management with BPM & SOA Suite. It is best to regard the current PS6 case management feature as a case management framework. The framework provides the building blocks for creating a case management system that is fully integrated into Oracle BPM Suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog which outlines Case Management within PS6 in the following link. I wanted to provide more context on Case Management from my perspective in this blog. PS6 Case Management - High level View BPM PS6 includes “Case” as a first-class component in a SOA Suite composite. The Case components (added to the SOA composite) are created when a BPM process is assigned to a case in JDeveloper. The SOA Case component is defined and configured within JDeveloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores etc. "Activities" are associated with a case, and become available to be executed via the case APIs. Activities are BPM processes, Human Activities or Java call-outs. The PS6 release includes some additional database tables to store the case metadata and case instance data (data objects, comments, etc.). These new tables are created within the SOA_INFRA schema, and the documents associated with a case go into a document repository that is configured with the case. One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rule component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of that case. Events are fired during the lifetime of the case (e.g. case created, activity started, activity ended, note added, document uploaded). Internal Case state The internal state of a case is represented by the diagram below, which shows the internal states and the transition paths for a Case from one state to the next. Each transition in state will create an event that can be enacted upon via the Case rules engine. Defining a case A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper creates the following components within the SCA composite: the Case component, the Case component interfaces (WSDL etc.) and the Case Rules component (Oracle Business Rules), and adds the Case component and Case Rules component to the BPM SOA composite. Case Configuration The following section gives a high-level overview of the items that can be configured for a BPM Case. 
    Case Activities A Case is associated with a set of activities that are to be performed as part of that Case. Case activities can be: SOA Human Tasks BPM processes Custom Task (Java Class) Case activities are created from pre-existing BPM processes or human tasks, which, once defined, can be configured additionally as Case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly as: Milestones Milestones are (optional) user defined logical milestones that can be achieved within a case. No activities are associated with a milestone, but milestone attainment can be programmatically set and events raised when milestones are reached Outcomes User defined status of a completed case. An event is fired when an outcome is attained. Case Data Defines the data that will be stored with a case XML schemas define the data that is stored with the case. Case Documents Defines the location of documents that are attached to a case (e.g. WebCenter Content) User Defined Events Optional user defined events that can be fired or captured to drive case processing rules Stakeholders Defines the actors who can participate in the case (roles, users, groups) Defines permissions for individual case operations (read case, create document etc…) Business Rules Business rules are the main component controlling the flow of a Case Each case has an associated business ruleset Rules are fired on receiving Case events (or User defined events) Life cycle events Milestone events Activity events Data events Document events Comment events User event Managing the Case Managing the lifecycle of a case is achieved in two ways: Managing case logic with Business Rules Managing the case lifecycle via the Case APIs. A BPM Case can be viewed as a set of case data & documents along with the activities that can be performed within a case and also the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case lifecycle is achieved through both the configuration of business rules and the “manual” interaction with a case instance through the Case APIs. Business Rules and Case Events A key component within the Case management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework at each of the processes and stages that a case instance will travel through during its lifetime. The following case events are part of the BPM Case: Life cycle events Milestone events Activity events Data events Document events Comment events User event The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to enact upon an event being received. Case API & Interaction Along with the business rules component, Cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications to integrate into the case management framework. The APIs allow for updating case comments & documents, executing case activities, updating milestones etc. As there are no built-in case management UI functions within the PS6 release, Cases need to be managed via a custom built UI, interacting with selected case instances, launching case activities, closing cases etc. 
    (There is expected to be a UI component within subsequent releases) Logical Case Flow The diagram below is intended to depict a logical view of the case steps for a typical case. A UI or other service calls the Case interface to create a Case instance The case instance is created & database data inserted A lifecycle event is raised indicating a case activity (created) event The case business rules capture the event and decide on an action to take Additionally other parties can subscribe to Case events via EDN The business rules may handle the event, e.g. configured to execute a case activity on case creation event The BPM/Human Workflow/Custom activity is executed A case activity event is raised on the execute activity A case work UI or business service can inspect the case instance and call other actions to progress that case, such as: Execute activity Add Note Add document Add case data Update Milestone Raise user defined event Suspend case Resume case Close Case Summary Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases! Take a look at the following links for official documentation etc. http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm https://blogs.oracle.com/bpm/entry/just_in_case

    Read the article

  • Why am I getting this "Connection to PulseAudio failed" error?

    - by Dave M G
    I have a computer that runs Mythbuntu 11.10. It has an external USB Kenwood Digital Audio device. When I open up pavucontrol, I get this message: If I do as the message suggests and run start-pulseaudio-x11, I get this output: $ start-pulseaudio-x11 Connection failure: Connection refused pa_context_connect() failed: Connection refused How do I correct this error? Update: Somewhere during the course of doing the suggested tests in the comments, a new audio device has now become visible in my sound settings. I have not attached or made any new device, so this must be the result of some setting change. The device I use and know about is the Kenwood Audio device. The "GF108" device will play sound through the Kenwood anyway, but not reliably: Command line output as requested in the comments: $ ls -l ~/.pulse* -rw------- 1 mythbuntu mythbuntu 256 Feb 28 2011 /home/mythbuntu/.pulse-cookie /home/mythbuntu/.pulse: total 200 -rw-r--r-- 1 mythbuntu mythbuntu 8192 Oct 23 01:38 2b98330d36bf53bb85c97fc300000008-card-database.tdb -rw-r--r-- 1 mythbuntu mythbuntu 69 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-sink -rw-r--r-- 1 mythbuntu mythbuntu 68 Nov 16 22:51 2b98330d36bf53bb85c97fc300000008-default-source -rw-r--r-- 1 mythbuntu mythbuntu 49152 Oct 14 12:30 2b98330d36bf53bb85c97fc300000008-device-manager.tdb -rw-r--r-- 1 mythbuntu mythbuntu 61440 Oct 23 01:40 2b98330d36bf53bb85c97fc300000008-device-volumes.tdb lrwxrwxrwx 1 mythbuntu mythbuntu 23 Nov 16 22:50 2b98330d36bf53bb85c97fc300000008-runtime -> /tmp/pulse-EAwvLIQZn7e8 -rw-r--r-- 1 mythbuntu mythbuntu 77824 Nov 1 12:54 2b98330d36bf53bb85c97fc300000008-stream-volumes.tdb And yet more requested command line output: $ ps auxw|grep pulse 1000 2266 0.5 0.2 294184 9152 ? S<l Nov16 4:26 pulseaudio -D 1000 2413 0.0 0.0 94816 3040 ? S Nov16 0:00 /usr/lib/pulseaudio/pulse/gconf-helper 1000 4875 0.0 0.0 8108 908 pts/0 S+ 12:15 0:00 grep --color=auto pulse
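    A hedged diagnostic sketch: some Mythbuntu installs ship with PulseAudio autospawn disabled, in which case "Connection refused" simply means no per-user daemon is running; whether that is the cause here is an assumption, but it is cheap to check.

        # Is a PulseAudio daemon running for this user?
        pulseaudio --check && echo "daemon running" || echo "no daemon"

        # See whether autospawn has been switched off (system-wide or per-user)
        grep -i autospawn /etc/pulse/client.conf ~/.pulse/client.conf 2>/dev/null

        # Start a daemon manually for this session, then retry pavucontrol
        pulseaudio --start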

    Read the article

  • pfSense 2.1 OpenVPN client not using tunnelled interface

    - by Brian M. Hunt
    I'm having some trouble getting OpenVPN working on my pfSense box. The issue is quite strange to me. When I have the OpenVPN turned on, only my router is able to connect to the Internet. From the router I can use ping, links, etc., and connections work exactly as expected - through the VPN, with the IP address assigned by my VPN provider (Proxy.sh, incidentally). However, none of the clients on the local network can connect to the Internet. I get timeouts when using ping or a web browser. I can ping my router, and the IP address of the gateway. When I switch the default gateway from the VPN to my ISP's gateway, all works exactly as expected. Here is the routing table (netstat -r) when in VPN mode, and a key for it: IPv4 Destination Gateway Flags Refs Use Mtu Netif Expire 0.0.0.0/1 10.XX.X.53 UGS 0 122 1500 ovpnc1 = default 10.XX.X.53 UGS 0 235 1500 ovpnc1 8.8.8.8 10.XX.X.53 UGHS 0 82 1500 ovpnc1 10.XX.X.1/32 10.11.0.53 UGS 0 0 1500 ovpnc1 10.XX.X.53 link#12 UH 0 0 1500 ovpnc1 10.XX.X.54 link#12 UHS 0 0 16384 lo0 ZZ.XX.XXX.0/20 link#1 U 0 83 1500 re0 ZZ.XX.XXX.XXX link#1 UHS 0 0 16384 lo0 127.0.0.1 link#9 UH 0 12 16384 lo0 128.0.0.0/1 10.11.0.53 UGS 0 123 1500 ovpnc1 192.168.1.0/24 link#11 U 0 1434 1500 ue0 192.168.1.1 link#11 UHS 0 0 16384 lo0 YYY.YYY.YYY.YYY/32 ZZ.XX.XXX.1 UGS 0 249 1500 re0 IP addresses 10.XX.X.53/54 - My DHCP-assigned IP address/pair from the VPN provider ZZ.XX.XXX.XXX - My external IP assigned by my ISP YYY.YYY.YYY.YYY - The external IP assigned by the VPN provider Interfaces ovpnc1 - My VPN client interface re0 - My LAN interface ue0 - My WAN interface This looks essentially like what I would expect it to be. The default route is through the VPN provider. The VPN address is routed through the ISP-assigned IP address. I am not sure what would be wrong here. So figuring this was a firewall issue, I basically tried enabling all in/out traffic. This did not seem to remedy the problem. Also figuring it could possibly be some client networking issue, I restarted the clients on the LAN. This did not help. I also ran route flush and reset the routes manually. So I am a bit stumped, and would be very grateful for any thoughts on what the problem might be.
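    Not an answer, but a hedged diagnostic sketch from the pfSense shell. A common culprit in this situation is outbound NAT not covering the OpenVPN interface, so LAN clients leave ovpnc1 with untranslated 192.168.1.x source addresses; that is an assumption, not something the routing table above proves.

        # Do LAN source addresses get translated when they leave via the VPN?
        pfctl -s nat | grep ovpnc1

        # Watch a LAN client's ping on the VPN interface; if the 192.168.1.x
        # source address appears untranslated, outbound NAT is the problem
        tcpdump -ni ovpnc1 icmp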

    Read the article

  • Is there a way to change the GTK theme for applications run as superuser on KDE?

    - by Patches
    When I run GTK applications on KDE, they use the QtCurve theme that matches my color and font scheme as configured in the KDE System Settings application. However, GTK applications run as superuser use the old default GNOME theme, regardless of whether I run them with kdesudo, gksudo, or sudo on a terminal. For example, here's gedit run as superuser on top, and under my normal user account on the bottom: Strangely, KDE applications run with kdesudo display the default Oxygen styling but use my settings when run with sudo on a terminal. Is there any way to configure the styling GTK applications use when run as superuser on KDE?
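    A hedged workaround sketch: GTK2 applications started as root read root's own configuration, so pointing root at the same theme files as the normal user often does the trick. The commands assume the theme lives in the user's home directory; skip the symlinks if it is installed system-wide under /usr/share/themes.

        # Let root reuse the normal user's GTK2 theme settings
        sudo cp ~/.gtkrc-2.0 /root/.gtkrc-2.0
        sudo ln -s ~/.themes /root/.themes   # only needed for per-user themes
        sudo ln -s ~/.icons  /root/.icons    # likewise for per-user icon sets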

    Read the article

  • Using DNS in iproute2

    - by Oliver
    In my setup I can redirect the default gateway based on the source address. Let's say a user connected through tun0 (10.2.0.0/16) is redirected to another VPN. That works fine! ip rule add from 10.2.0.10 lookup vpn1 In a second rule I redirect the default gateway to another gateway if the user accesses a certain IP address: ip rule add from 10.2.0.10 to 94.142.154.71 lookup vpn2 If I access the page on 94.142.154.71 (myip.is) the user is correctly routed and I can see the IP of the second VPN. On any other page the IP address of vpn1 is shown. But how do I tell iproute2 that all requests to e.g. google.com should be redirected through vpn2?
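    iproute2 itself only matches on addresses, never on names, so the usual approach is to resolve the hostname and add one rule per returned address; a hedged sketch follows. Note that services like google.com rotate and add addresses constantly, so a rule set built this way is fragile and needs periodic refreshing (or a dedicated routing daemon).

        # Add a vpn2 rule for every A record the name currently resolves to
        for ip in $(dig +short google.com A); do
            ip rule add from 10.2.0.10 to "$ip" lookup vpn2
        done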

    Read the article

  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this but it is one of my first scripts ever so I am sure there is a better way:) Let me know how you would go about doing this. I'm looking for a simple yet efficient way to do this. Here is some important background info: (It might be a little confusing but hopefully by the end it will make sense.) 1) This image shows the structure/location of the relevant dirs and files. 2) The packages.file located at ./config/default/config/packages is a space delimited file. field5 is the "package name" which I will call $a for explanation's sake. field4 is the name of the dir containing the $a dir, which I will call $b. field1 shows if the package is selected or not, "X" (capital x) for selected and "O" (capital o as in orange) for not selected. Here is an example of what the packages.file might contain: ... X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0 O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0 X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0 X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0 O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0 O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0 O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0 ... 3) For almost every package listed in the "packages.file" there is a corresponding ".cache file" The .cache file for package $a would be located at ./package/$b/$a/$a.cache The .cache files contain a list of dependencies for that particular package. Here is an example of what one of the .cache files might look like. Note that the dependencies are field2 of lines containing "[DEP]" These dependencies are all names of packages in the "package.file" [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005 [BUILDTIME] 295 (9) [SIZE] 11.64 MB, 191 files [DEP] 00-dirtree [DEP] bash [DEP] binutils [DEP] bzip2 [DEP] cf [DEP] coreutils ... So with all that in mind... I'm looking for a shell script that: From within the "main dir" Looks at the ./config/default/config/packages file and finds the "selected" packages and reads the corresponding .cache Then compiles a list of dependencies that excludes the already selected packages Then selects the dependencies (by changing field1 to X) in the ./config/default/config/packages file and repeats until all the dependencies are met Note: The script will ultimately end up in the "scripts dir" and be called from the "main dir". If this is not clear let me know what needs clarification. For those interested I'm playing around with T2 SDE. If you are into playing around with Linux it might be worth taking a look.
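    Not the asker's script (that isn't shown), but a minimal sketch of one way to approach it, assuming the field positions described above and that it is run from the "main dir". The file and directory layout is taken from the question; everything else is illustrative.

        #!/bin/sh
        # Sketch: repeatedly select the dependencies of all selected packages
        # in ./config/default/config/packages until nothing changes.
        # Assumes: field1 = X/O selection flag, field4 = dir, field5 = package name,
        # deps are field2 of [DEP] lines in ./package/<dir>/<pkg>/<pkg>.cache
        PKGFILE=./config/default/config/packages

        changed=1
        while [ "$changed" -eq 1 ]; do
            changed=0
            for pkg in $(awk '$1 == "X" { print $5 }' "$PKGFILE"); do
                dir=$(awk -v p="$pkg" '$5 == p { print $4; exit }' "$PKGFILE")
                cache="./package/$dir/$pkg/$pkg.cache"
                [ -f "$cache" ] || continue
                for dep in $(awk '$1 == "[DEP]" { print $2 }' "$cache"); do
                    # line number of the dependency if present but not yet selected
                    line=$(awk -v d="$dep" '$1 == "O" && $5 == d { print NR; exit }' "$PKGFILE")
                    if [ -n "$line" ]; then
                        sed -i "${line}s/^O/X/" "$PKGFILE"   # flip O to X in place
                        changed=1
                    fi
                done
            done
        done

    The outer while loop is what makes it recursive in effect: newly selected dependencies are picked up on the next pass, and the script stops once a full pass selects nothing new.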

    Read the article

  • xp vpn client dns issue

    - by David Archer
    Hi All, I have a problem with DNS when connected to my work VPN. For ease of explanation I'll use the following in my outline of the problem: - name of my machine on work network is REMOTE_XP (original, I know) - ip of my machine on work network is 192.168.2.80 - name of my machine on my local network is LOCAL_XP - ip of my machine on my local network is 10.0.0.3 What I want to be able to do when connected to vpn: - browse the internet from LOCAL_XP - ping by name REMOTE_XP Now it seems each setting gives me one of those two wishlist items, but never both. If I go to my vpn network properties (on LOCAL_XP) and uncheck the "use default dns on remote network" then I can browse the internet from my local machine but can't ping REMOTE_XP (though I can ping 192.168.2.80). If I check "use default dns..." then I can ping REMOTE_XP but can't browse the internet from LOCAL_XP. Is there a way I can have my dns cake and eat it, or will I have to accept that it will be an either/or situation? Thanks in advance.
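    One hedged workaround, assuming REMOTE_XP keeps the address 192.168.2.80 on the work network: leave the "use default DNS" box unchecked so browsing keeps working, and pin the name locally in the hosts file so it still resolves. Run from an elevated command prompt on LOCAL_XP.

        rem Append a static entry for the work machine to the local hosts file
        echo 192.168.2.80 REMOTE_XP >> %SystemRoot%\system32\drivers\etc\hosts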

    Read the article

  • sound stops working after a while in ubuntu 12.10

    - by clio
    I did a clean install of Ubuntu 12.10. Everything seemed fine at first, including sound, but after a while it stopped working. To get it back I have to restart, sometimes even more than once. Any idea on how to fix this? sudo aplay -L default Playback/recording through the PulseAudio sound server sysdefault:CARD=PCH HDA Intel PCH, ALC665 Analog Default Audio Device front:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog Front speakers surround40:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog 4.0 Surround output to Front and Rear speakers surround41:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog 4.1 Surround output to Front, Rear and Subwoofer speakers surround50:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog 5.0 Surround output to Front, Center and Rear speakers surround51:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog 5.1 Surround output to Front, Center, Rear and Subwoofer speakers surround71:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog 7.1 Surround output to Front, Center, Side, Rear and Woofer speakers iec958:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Digital IEC958 (S/PDIF) Digital Audio Output hdmi:CARD=PCH,DEV=0 HDA Intel PCH, HDMI 0 HDMI Audio Output dmix:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog Direct sample mixing device dmix:CARD=PCH,DEV=1 HDA Intel PCH, ALC665 Digital Direct sample mixing device dmix:CARD=PCH,DEV=3 HDA Intel PCH, HDMI 0 Direct sample mixing device dsnoop:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog Direct sample snooping device dsnoop:CARD=PCH,DEV=1 HDA Intel PCH, ALC665 Digital Direct sample snooping device dsnoop:CARD=PCH,DEV=3 HDA Intel PCH, HDMI 0 Direct sample snooping device hw:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog Direct hardware device without any conversions hw:CARD=PCH,DEV=1 HDA Intel PCH, ALC665 Digital Direct hardware device without any conversions hw:CARD=PCH,DEV=3 HDA Intel PCH, HDMI 0 Direct hardware device without any conversions plughw:CARD=PCH,DEV=0 HDA Intel PCH, ALC665 Analog Hardware device with all software conversions plughw:CARD=PCH,DEV=1 HDA Intel PCH, ALC665 Digital Hardware device with all software conversions plughw:CARD=PCH,DEV=3 HDA Intel PCH, HDMI 0 Hardware device with all software conversions
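    A hedged first step while debugging: when the sound drops out, try restarting the audio stack for the session instead of rebooting. Whether this actually brings it back on this hardware is an assumption, but it narrows the problem down to PulseAudio versus the ALSA driver.

        # Restart the per-user PulseAudio daemon
        pulseaudio -k && pulseaudio --start

        # If that is not enough, reload the ALSA driver stack as well
        sudo alsa force-reload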

    Read the article

  • Search predictions broken in Chrome - reinstall didn't solve the problem

    - by Shimmy
    I recently changed the default search engine to a custom Google search URL (using baseUrl) with some additional parameters and removed all the rest of the search engines, and since then, the search predictions stopped working. I even tried to reinstall Chrome but as soon as I resync, the problem is back! Search predictions are just gone with no option to fix!! In IE changing the search provider allows specifying a prediction (suggestion) provider. In Chrome, once you change the default search engine, you'll never be able to have predictions again!! This is a terrible bug, I mean WTF!!! Is there any workaround for that? I posted a bug report a while ago but it seems no one has looked at it. I'm about to give up on Chrome and go back to IE; the only good thing about Chrome is the extension market and the ad blocker (which I can find in IE as well). The performance differences don't matter to me too much. Thanks

    Read the article

  • Install updates without breaking shell theme?

    - by Niko
    Recently I installed a copy of Oneiric Ocelot 11.10 on my ThinkPad. Everything went fine; I installed the GNOME Shell and gnome-tweak-tool to customize everything. I changed the shell theme, the GTK+ theme and the icon set. After everything fit my needs perfectly, I installed some updates from the update manager (no upgrades, just small updates). I had to restart, and after I restarted, my GNOME Shell was broken. In tweak-tool, it showed the customized shell as the default one, and my GTK theme was broken as well (it looked like two themes "frankensteined" together...). The bad thing was - I couldn't get things back to the default settings! So the only thing left was to use the (non-broken) Unity shell. What can I do to stop these things from happening? (I mean... sure, I could avoid the updates, but that would be kind of stupid, too.) I only have these PPAs installed: ferramroberto-gnome3-oneiric.list (and .save), playonlinux.list (and .save). And how can I fix the broken gnome-shell?
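    A hedged recovery sketch (it does not address the underlying PPA conflict): GNOME 3.2 keeps the theme choices in gsettings, so resetting those keys should at least get back to the stock look. The last command only applies if the User Themes shell extension is installed.

        # Put the GTK and icon themes back to their packaged defaults
        gsettings reset org.gnome.desktop.interface gtk-theme
        gsettings reset org.gnome.desktop.interface icon-theme

        # Reset the shell theme (requires the User Themes extension)
        gsettings reset org.gnome.shell.extensions.user-theme name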

    Read the article

  • How to delete Chrome temp data (history, cookies, cache) using command line

    - by Dio Phung
    On Windows 7, I tried running this script but still cannot clear Chrome temp data. Can someone figure out what's wrong with the script? Where does Chrome store its history and cache? Thanks ECHO -------------------------------------- ECHO **** Clearing Chrome cache taskkill /F /IM "chrome.exe">nul 2>&1 set ChromeDataDir=C:\Users\%USERNAME%\AppData\Local\Google\Chrome\User Data\Default set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1 del /q /s /f "%ChromeCache%\*.*">nul 2>&1 del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1 del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1 set ChromeDataDir=C:\Users\%USERNAME%\Local Settings\Application Data\Google\Chrome\User Data\Default set ChromeCache=%ChromeDataDir%\Cache>nul 2>&1 del /q /s /f "%ChromeCache%\*.*">nul 2>&1 del /q /f "%ChromeDataDir%\*Cookies*.*">nul 2>&1 del /q /f "%ChromeDataDir%\*History*.*">nul 2>&1 ECHO **** Clearing Chrome cache DONE

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called ncdc_parse.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
    Coordinator jobs can take all the same actions as Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS. That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have a different hive-default.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-default.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
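    As a follow-on to the coordinator remark earlier, here is a hedged sketch of what that extra scheduling XML might look like for a daily run. The dates, frequency, and schema version are illustrative, and it simply reuses the nameNode/weatherRoot values from the job.properties shown above.

        <!-- coordinator.xml: run the weather workflow once a day (illustrative values) -->
        <coordinator-app name="WeatherManCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.2">
          <action>
            <workflow>
              <!-- same HDFS directory that holds workflow.xml -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>

    To submit it, job.properties would point at the coordinator instead of the workflow (oozie.coord.application.path in place of oozie.wf.application.path), and the job is started with the same oozie job -config ... -run call.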

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >