Search Results

Search found 19635 results on 786 pages for 'materialized path pattern'.


  • Some Mac applications crash frequently, with "__THE_SYSTEM_HAS_NO_PORT_SETS_AVAILABLE__" in backtrace

    - by smokris
    Recently I'm finding that some Mac OS X applications are crashing frequently with backtraces like the following:

        Process:         Mail [39226]
        Path:            /Applications/Mail.app/Contents/MacOS/Mail
        Identifier:      com.apple.mail
        Version:         4.4 (1082)
        Build Info:      Mail-10820000~1
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [338]
        Date/Time:       2011-01-12 21:59:48.383 -0500
        OS Version:      Mac OS X 10.6.6 (10J567)
        Report Version:  6
        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000002, 0x0000000000000000
        Crashed Thread:  12  Dispatch queue: com.apple.root.default-priority
        [...]
        Thread 12 Crashed:  Dispatch queue: com.apple.root.default-priority
        0   com.apple.CoreFoundation    0x00007fff81653975 __THE_SYSTEM_HAS_NO_PORT_SETS_AVAILABLE__ + 5
        1   com.apple.CoreFoundation    0x00007fff815a1f69 __CFRunLoopFindMode + 553
        2   com.apple.CoreFoundation    0x00007fff815a1c5d __CFRunLoopCreate + 317
        3   com.apple.CoreFoundation    0x00007fff815a1a78 _CFRunLoopGet0 + 744
        4   com.apple.CoreFoundation    0x00007fff815dc90a CFRunLoopRunInMode + 58
        5   com.apple.MessageFramework  0x00007fff81d01183 +[NSRunLoop(MessageExtensions) _flushQueuedEventsAddingSource:] + 120
        6   com.apple.MessageFramework  0x00007fff81d010d2 +[NSRunLoop(MessageExtensions) flushQueuedEvents] + 36
        7   com.apple.MessageFramework  0x00007fff81ce6515 -[_MFInvocationOperation main] + 275
        8   com.apple.Foundation        0x00007fff83921de4 -[__NSOperationInternal start] + 681
        9   com.apple.Foundation        0x00007fff83a00beb __doStart2 + 97
        10  libSystem.B.dylib           0x00007fff801402c4 _dispatch_call_block_and_release + 15
        11  libSystem.B.dylib           0x00007fff8011e831 _dispatch_worker_thread2 + 239
        12  libSystem.B.dylib           0x00007fff8011e168 _pthread_wqthread + 353
        13  libSystem.B.dylib           0x00007fff8011e005 start_wqthread + 13
        [...]

    Any ideas about what might be causing this and/or how to remedy it?


  • What's New in Database Lifecycle Management in Enterprise Manager 12c Release 3

    - by HariSrinivasan
    Enterprise Manager 12c Release 3 includes improvements and enhancements across every area of the product. This blog provides an overview of the new and enhanced features in the Database Lifecycle Management area. I will deep dive into specific features in subsequent posts.

    "What's New?"

    In this release, we focused on four things:

    1. Lifecycle management support for the new Database 12c feature: Pluggable Databases
    2. Management of long-running processes, such as a security patch cycle (Change Activity Planner)
    3. Management of a large number of systems by:
       · Leveraging new framework capabilities for lifecycle operations, such as the new advanced 'emcli' script option
       · Refining features such as configuration search and compliance
    4. Minor improvements and quality fixes to existing features:
       · Rollback support for single-instance databases
       · Improved "offline" patching experience
       · Faster collection of ORACLE_HOME configurations

    Lifecycle Management Support for the new Database 12c - Pluggable Databases

    Database 12c introduces Pluggable Databases (PDBs), a brand-new addition to help you achieve your consolidation goals. Pluggable databases offer unprecedented consolidation at the database level and native lifecycle verbs for creating, plugging, and unplugging databases on a container database (CDB). Enterprise Manager can supplement the capabilities of pluggable databases by offering workflows for migrating, provisioning, and cloning them using the software library and the deployment procedures. For example, Enterprise Manager can migrate an existing database to a PDB, or clone a PDB by storing a versioned copy in the software library. One can also manage the planned downtime related to patching by migrating the PDBs to a new CDB. While pluggable databases offer these exciting features, they can also pose configuration management and compliance challenges if not managed properly. Enterprise Manager features like inventory management, topology associations, and configuration search can mitigate the sprawl of PDBs and also lock them to predefined golden standards using configuration comparison and compliance rules. Learn More ...

    Management of Long-Running Datacenter Processes - Change Activity Planner (CAP)

    Currently, customers resort to cumbersome methods to create, execute, track, and monitor change activities within their data center. Some customers use traditional tools such as spreadsheets, project planners, and in-house custom-built solutions. Customers often have weekly sync-up meetings across stakeholders to collect status and updates. Some change activities, for example the quarterly patch set update (PSU) patch rollouts, are not single tasks but processes with multiple tasks. Some of those tasks are performed within Enterprise Manager Cloud Control (for example, patching) and some are performed outside of it. These tasks often run for a long period of time and involve multiple people or teams. Enterprise Manager Cloud Control supports core data center operations such as configuration management, compliance management, and automation. Enterprise Manager Cloud Control release 12.1.0.3 leverages these capabilities and introduces the Change Activity Planner (CAP). CAP provides the ability to plan, execute, and track change activities in real time. It covers the typical datacenter activities that are spread over a long period of time, across multiple people and multiple targets (even target types).
    Here are some examples of change activity processes in a datacenter:

    · Patching large environments (PSU/CPU patching cycles)
    · Upgrading a large number of database environments
    · Rolling out compliance rules
    · Database consolidation to Exadata environments

    CAP provides user flows for compliance officers/managers (including lead administrators) and operators (DBAs and admins). Managers can create change activity plans for various projects and allocate the resources, targets, and groups affected. Upon activation of the plan, tasks are created and automatically assigned to individual administrators based on target ownership. Administrators (DBAs) can identify their tasks and understand the context, schedules, and priorities. They can complete tasks using Enterprise Manager Cloud Control automation features such as patch plans (or in some cases outside Enterprise Manager). Upon completion, compliance is evaluated for validation, and the status of the tasks and the plans is updated. Learn More about CAP ...

    Improved Configuration & Compliance Management of a Large Number of Systems

    Improved configuration comparison: get to the configuration comparison results faster for simple ad-hoc comparisons. When performing a 1-to-1 comparison, Enterprise Manager performs the comparison immediately and takes the user directly to the results, without the user having to wait for a job to be submitted and executed.

    Flattened system comparisons reduce comparison setup time and complexity. In addition to the previously existing topological comparison, users now have an option to compare using a "flattened" methodology. Flattening means removing duplicate target instances within the systems and removing the hierarchy of member targets. The result is that differences are much easier to spot, particularly for specific use cases like comparing patch levels between complex systems like RAC and Fusion Apps.

    Improved configuration search & advanced EMCLI script option for mass automation: Enterprise Manager 12c introduces a new framework-level capability to script and stitch together multiple tasks using EMCLI. This powerful capability can be leveraged for lifecycle operations, especially when executing a task over a large number of targets. Specific usages include retrieving a qualified list of targets using configuration search and then using the result set for automation. Another example would be executing a patching operation and then re-executing it on targets where it may have failed. This is complemented by other enhancements, such as better usability for designing reusable configuration searches. In EM 12c Release 3, a simplified UI makes building ad-hoc searches even easier. Searching for missing patches is a common use of configuration search; this used to require the advanced options, which are now clearly defined and easy to use.

    You can also perform configuration search using the EMCLI. Users can find and execute configuration searches from the EMCLI, which can be extremely useful for building sophisticated automation scripts. For example, run the search named "Oracle Databases on Exadata", which finds all database targets running on top of Exadata, and further filter the results by refining by options like name, host, etc.:

        emcli get_targets -config_search="Databases on Exadata" -target_name="exa%"

    Use this in powerful mass-automation operations with the new emcli script option, for example to find all databases running on Exadata and housing E-Biz, and patch them.
    Create a Python script with emcli functions and invoke it directly in the new EMCLI script-option shell:

        $<path to emcli>/emcli @myPSU_Patch.py

    Richer compliance content: there are now over 50 Oracle-provided compliance standards, including new standards for Pluggable Database, Fusion Applications, Oracle Identity Manager, Oracle VM, and Internet Directory, plus 9 Oracle-provided real-time monitoring standards containing over 900 compliance rules across 500 facets. These new real-time compliance standards cover both Exadata compute nodes and Linux servers. The result is increased Oracle software coverage and faster time to compliance monitoring on Exadata.

    Enhancements to Patch Management

    Overhauled "offline" patching experience: the patch upload UI has been simplified to improve the offline patching experience, and there is now a single-step process to get patches into the software library. Customers often maintain local repositories of patches, sometimes called software depots, where they host the patches downloaded from My Oracle Support. In the past, you had to move these patches to your desktop and then upload them to Enterprise Manager's software library through the Enterprise Manager Cloud Control user interface. You can now use the following EMCLI command to upload multiple patches directly from a remote location within the data center:

        $emcli upload_patches -location <Path to Patch directory> -from_host <HOSTNAME>

    The upload process filters all of the new patches, automatically selects the relevant metadata files from the location, and uploads the patches to the software library.

    Other improvements: patch rollback for single-instance databases is a new option in the patch plan that rolls back the patches added to the patch plan. Upon execution, the procedure rolls back the patch and the SQL applied to the single-instance databases. Improved and faster configuration collection of Oracle Home targets enables more reliable automation in higher-level functions like provisioning, patching, or Database as a Service.

    To recap, here is a list of database lifecycle management features (red highlights mark features that are new or enhanced in Release 3):

    • Discovery, inventory tracking and reporting
    • Database provisioning, including:
      o Migration to pluggable databases
      o Plugging and unplugging of pluggable databases
      o Gold-image-based cloning
      o Scaling of RAC nodes
    • Schema and data change management
    • End-to-end patch management in online and offline modes, including:
      o Patch advisories in online (connected with My Oracle Support) and offline modes
      o Patch pre-deployment analysis, deployment and rollback (currently only for single-instance databases)
      o Reporting
    • Upgrade planning and execution of the upgrade process
    • Configuration management
    • Compliance management with out-of-box content
    • Change Activity Planner for planning, designing and tracking long-running processes

    For more information on Enterprise Manager’s database lifecycle management capabilities, visit http://www.oracle.com/technetwork/oem/lifecycle-mgmt/index.html
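    To make the script option concrete, here is a minimal sketch of such a script. This is a loose illustration on my part, not a verbatim Oracle example: the search name is the one used above, and the response handling assumes EMCLI's Jython script mode, where verbs are exposed as functions whose keyword arguments mirror the command-line options.

        # myPSU_Patch.py (sketch) - run with: emcli @myPSU_Patch.py
        # find the qualified targets via the saved configuration search
        targets = get_targets(config_search='Databases on Exadata', target_name='exa%')
        # inspect the result set before wiring in any patching steps
        print targets.out()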


  • What is the official installer for Unix packages on Mac OS?

    - by dehmann
    I'm a bit confused about the installation of standard Unix packages on Mac OS X. For example, I have /usr/bin/svn, which is SVN v1.4.4, but FinkCommander says svn is not installed. The same holds for other packages, like emacs, etc. Is that just a wrong FinkCommander setting? Currently it is set to install everything in /sw, which is not even in the PATH. So, do I just have to set it to install packages to /usr, and it will recognize the installed software? I don't want to install duplicate packages of everything, and it is quite odd that FinkCommander seems to be out of sync with the installed software. Or is there another installer I should be using? Is MacPorts the recommended installer to use? (I'm using Mac OS 10.5.8.)


  • Run batch on files in a relative folder

    - by Matteo Cuellar Vega
    I have a Windows batch script that runs a SOX command on various files, but I don't know how to get the batch to run on files in a path relative to that of the SOX executable. Currently all the files are in the root and it outputs to /combined.

    The batch script:

        cd %~dp0
        mkdir combined
        FOR %%A IN (*.mp3) DO sox static.mp3 %%A "combined/%%~nxA"
        pause

    I want the script to run the sox command on files in the directory "audiotracks" and output to the directory "combined". To give you an idea, this would be the desired folder structure:

        /root
            sox.exe
            batch.bat
            static.mp3
            /audiotracks
                audio1.mp3
                audio2.mp3
                audio3.mp3
                audio4.mp3
            /combined
                audio1out.mp3
                audio2out.mp3
                audio3out.mp3
                audio4out.mp3

    Is this possible, or is there a better method of doing this? Any help would be greatly appreciated. Thanks a lot!
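    One way this could be approached, as an untested sketch rather than the asker's final script: point the FOR loop at the audiotracks subfolder while keeping everything relative to the script's own directory via %~dp0. Note the output naming here simply reuses the input file name rather than appending "out".

        cd /d %~dp0
        if not exist combined mkdir combined
        REM iterate over the mp3 files in the audiotracks subfolder,
        REM writing each result into the combined subfolder
        FOR %%A IN (audiotracks\*.mp3) DO sox static.mp3 "%%A" "combined\%%~nxA"
        pause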


  • How can I install a headless JDK on an Ubuntu Jaunty server?

    - by Hanno Fietz
    I recently set up a build server that requires a JDK to run (for example, to compile the Java sources). The OpenJDK package in Ubuntu pulls in the OpenJDK JRE as a dependency which, in turn, depends on a large number of packages that are only relevant for graphical environments. For the standard JRE there's a headless version of the package, but for the JDK there is none. This issue has been discussed in various places before, and one solution that I found and used was this:

        $ apt-get --no-install-recommends -d install openjdk-6-jdk
        $ dpkg -i --ignore-depends=openjdk-6-jre /path/to/just-downloaded.deb

    While this worked, it now leaves my system with a broken dependency tree, and apt-get refuses further installs until I run apt-get -f. Is there a better solution to this?
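    One commonly suggested alternative (my assumption, not something from the question) is to satisfy the openjdk-6-jre dependency with an empty placeholder package built with equivs, so apt's dependency tree stays consistent. A sketch:

        $ sudo apt-get install equivs
        $ equivs-control openjdk-6-jre-dummy
        # edit openjdk-6-jre-dummy: set "Package: openjdk-6-jre"
        # (field values here are placeholders; adjust version as needed)
        $ equivs-build openjdk-6-jre-dummy
        $ sudo dpkg -i openjdk-6-jre-dummy_1.0_all.deb
        $ sudo apt-get --no-install-recommends install openjdk-6-jdk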


  • Automatically create bug resolution task using the TFS 2010 API

    - by Bob Hardister
    My customer requires bug resolution to be approved and tracked. To minimize the overhead for developers I implemented a TFS 2010 server-side plug-in to automatically create a child resolution task for the bug when the “CCB” field is set to approved. The CCB field is a custom field. I also added the story points field to the bug WIT for sizing purposes. Redundant tasks will not be created unless the bug title is changed or the prior task is closed. The program writes an audit trail to a log file visible in the TFS Admin Console Log view. Here’s the code.

    BugAutoTask.cs

        /* SPECIFICATION
         * When the CCB field on the bug is set to approved, create a child task where the task:
         * name = Resolve bug [ID] - [Title of bug]
         * assigned to = same as assigned to field on the bug
         * same area path
         * same iteration path
         * activity = Bug Resolution
         * original estimate = bug points
         *
         * The source code is used to build a dll (Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll),
         * which needs to be copied to
         * C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins
         * on ALL TFS application-tier servers.
         *
         * Author: Bob Hardister.
         */
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using System.Text;
        using System.Diagnostics;
        using System.Linq;
        using Microsoft.TeamFoundation.Common;
        using Microsoft.TeamFoundation.Framework.Server;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Server;
        using Microsoft.TeamFoundation.Client;
        using System.Collections;

        namespace BugAutoTaskCreation {
            public class BugAutoTask : ISubscriber {

                public EventNotificationStatus ProcessEvent(
                    TeamFoundationRequestContext requestContext,
                    NotificationType notificationType,
                    object notificationEventArgs,
                    out int statusCode,
                    out string statusMessage,
                    out ExceptionPropertyCollection properties) {

                    statusCode = 0;
                    properties = null;
                    statusMessage = String.Empty;

                    // Error message for tracing the last code executed and optional fields
                    string lastStep = "No field values found or set ";
                    try {
                        if ((notificationType == NotificationType.Notification)
                            && (notificationEventArgs.GetType() == typeof(WorkItemChangedEvent))) {
                            WorkItemChangedEvent workItemChange = (WorkItemChangedEvent)notificationEventArgs;

                            // See the ConnectToTFS() method below to select which
                            // TFS instance/collection to connect to
                            TfsTeamProjectCollection tfs = ConnectToTFS();
                            WorkItemStore wiStore = tfs.GetService<WorkItemStore>();
                            lastStep = lastStep + ": connection to TFS successful ";

                            // Get the work item that was just changed by the user.
                            WorkItem witem = wiStore.GetWorkItem(workItemChange.CoreFields.IntegerFields[0].NewValue);
                            lastStep = lastStep + ": retrieved changed work item, ID:" + witem.Id + " ";

                            // Filter for Bug work items only
                            if (witem.Type.Name == "Bug") {
                                // DEBUG
                                lastStep = lastStep + ": changed work item is a bug ";

                                // Filter for CCB (i.e. Baseline Status) field set to approved only
                                bool BaselineStatusChange = false;
                                if (workItemChange.ChangedFields != null) {
                                    ProcessBugRevision(ref lastStep, workItemChange, wiStore, ref witem, ref BaselineStatusChange);
                                }
                            }
                        }
                    }
                    catch (Exception e) {
                        Trace.WriteLine(e.Message);
                        Logger log = new Logger();
                        log.WriteLineToLog(MsgLevel.Error, "Application error: " + lastStep + " - " + e.Message + " - " + e.InnerException);
                    }
                    statusCode = 1;
                    statusMessage = "Bug Auto Task Evaluation Completed";
                    properties = null;
                    return EventNotificationStatus.ActionApproved;
                }

                // PRIVATE METHODS
                private static void ProcessBugRevision(ref string lastStep, WorkItemChangedEvent workItemChange, WorkItemStore wiStore, ref WorkItem witem, ref bool BaselineStatusChange) {
                    foreach (StringField field in workItemChange.ChangedFields.StringFields) {
                        // DEBUG
                        lastStep = lastStep + ": last changed field is - " + field.Name + " ";
                        if (field.Name == "Baseline Status") {
                            lastStep = lastStep + ": retrieved bug baseline status field value, bug ID:" + witem.Id + " ";
                            BaselineStatusChange = (field.NewValue != field.OldValue);
                            if ((BaselineStatusChange) && (field.NewValue == "Approved")) {
                                // Instantiate logger
                                Logger log = new Logger();

                                // *** Create resolution task for this bug ***
                                // *******************************************

                                // Get the team project and selected field values of the bug work item
                                Project teamProject = witem.Project;
                                int bugID = witem.Id;
                                string bugTitle = witem.Fields["System.Title"].Value.ToString();
                                string bugAssignedTo = witem.Fields["System.AssignedTo"].Value.ToString();
                                string bugAreaPath = witem.Fields["System.AreaPath"].Value.ToString();
                                string bugIterationPath = witem.Fields["System.IterationPath"].Value.ToString();
                                string bugChangedBy = witem.Fields["System.ChangedBy"].OriginalValue.ToString();
                                string bugTeamProject = witem.Project.Name;
                                lastStep = lastStep + ": all mandatory bug field values found ";

                                // Optional fields
                                Field bugPoints = witem.Fields["Microsoft.VSTS.Scheduling.StoryPoints"];
                                if (bugPoints.Value != null) {
                                    lastStep = lastStep + ": all mandatory and optional bug field values found ";
                                }

                                // Initialize child resolution task title
                                string childTaskTitle = "Resolve bug " + bugID + " - " + bugTitle;

                                // At this point I can check if a resolution task (of the same name)
                                // for the bug already exists. If so, do not create a new resolution task.
                                bool createResolutionTask = true;
                                WorkItem parentBug = wiStore.GetWorkItem(bugID);
                                WorkItemLinkCollection links = parentBug.WorkItemLinks;
                                foreach (WorkItemLink wil in links) {
                                    if (wil.LinkTypeEnd.Name == "Child") {
                                        WorkItem childTask = wiStore.GetWorkItem(wil.TargetId);
                                        if ((childTask.Title == childTaskTitle) && (childTask.State != "Closed")) {
                                            createResolutionTask = false;
                                            log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID: " + bugID + ". Task not created as an open one of the same name already exists, ID:" + childTask.Id);
                                        }
                                    }
                                }

                                if (createResolutionTask) {
                                    // Define the work item type of the new work item
                                    WorkItemTypeCollection workItemTypes = wiStore.Projects[teamProject.Name].WorkItemTypes;
                                    WorkItemType wiType = workItemTypes["Task"];

                                    // Set up the new task and assign field values
                                    witem = new WorkItem(wiType);
                                    witem.Fields["System.Title"].Value = "Resolve bug " + bugID + " - " + bugTitle;
                                    witem.Fields["System.AssignedTo"].Value = bugAssignedTo;
                                    witem.Fields["System.AreaPath"].Value = bugAreaPath;
                                    witem.Fields["System.IterationPath"].Value = bugIterationPath;
                                    witem.Fields["Microsoft.VSTS.Common.Activity"].Value = "Bug Resolution";
                                    lastStep = lastStep + ": all mandatory task field values set ";

                                    // Optional fields
                                    if (bugPoints.Value != null) {
                                        witem.Fields["Microsoft.VSTS.Scheduling.OriginalEstimate"].Value = bugPoints.Value;
                                        lastStep = lastStep + ": all mandatory and optional task field values set ";
                                    }

                                    // Check for validation errors before saving the new task and linking it to the bug
                                    ArrayList validationErrors = witem.Validate();
                                    if (validationErrors.Count == 0) {
                                        witem.Save();

                                        // Link the new task (child) to the bug (parent)
                                        var linkType = wiStore.WorkItemLinkTypes[CoreLinkTypeReferenceNames.Hierarchy];

                                        // Fetch the work items to be linked
                                        var parentWorkItem = wiStore.GetWorkItem(bugID);
                                        int taskID = witem.Id;
                                        var childWorkItem = wiStore.GetWorkItem(taskID);

                                        // Add a new link to the parent relating the child, and save it
                                        parentWorkItem.Links.Add(new WorkItemLink(linkType.ForwardEnd, childWorkItem.Id));
                                        parentWorkItem.Save();
                                        log.WriteLineToLog(MsgLevel.Info, "Team project " + bugTeamProject + ": " + bugChangedBy + " - set the CCB field to \"Approved\" for bug, ID:" + bugID + ", which automatically created child resolution task, ID:" + taskID);
                                    }
                                    else {
                                        log.WriteLineToLog(MsgLevel.Error, "Error in creating bug resolution child task for bug ID:" + bugID);
                                        foreach (Field taskField in validationErrors) {
                                            log.WriteLineToLog(MsgLevel.Error, " - Validation Error in task field: " + taskField.ReferenceName);
                                        }
                                    }
                                }
                            }
                        }
                    }
                }

                private TfsTeamProjectCollection ConnectToTFS() {
                    // Connect to TFS
                    string tfsUri = string.Empty;
                    // Production TFS instance production collection
                    tfsUri = @"xxxx";
                    // Production TFS instance admin collection
                    //tfsUri = @"xxxxx";
                    // Local TFS testing instance default collection
                    //tfsUri = @"xxxxx";
                    TfsTeamProjectCollection tfs = new TfsTeamProjectCollection(new System.Uri(tfsUri));
                    tfs.EnsureAuthenticated();
                    return tfs;
                }

                // HELPERS
                public string Name {
                    get { return "Bug Auto Task Creation Event Handler"; }
                }

                public SubscriberPriority Priority {
                    get { return SubscriberPriority.Normal; }
                }

                public enum MsgLevel { Info, Warning, Error };

                public Type[] SubscribedTypes() {
                    return new Type[1] { typeof(WorkItemChangedEvent) };
                }
            }
        }

    Logger.cs

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace BugAutoTaskCreation {
            class Logger {
                // fields
                private string _ApplicationDirectory = @"C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs";
                private string _LogFileName = @"\CFG_ACCT_AT_OWS_BugAutoTaskCreation.log";
                private string _LogFile;
                private string _LogTimestamp = DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss");
                private string _MsgLevelText = string.Empty;

                // default constructor
                public Logger() {
                    // check for a prior log file
                    FileInfo logFile = new FileInfo(_ApplicationDirectory + _LogFileName);
                    if (!logFile.Exists) {
                        CreateNewLogFile(ref logFile);
                    }
                }

                // properties
                public string ApplicationDirectory {
                    get { return _ApplicationDirectory; }
                    set { _ApplicationDirectory = value; }
                }

                public string LogFile {
                    get {
                        _LogFile = _ApplicationDirectory + _LogFileName;
                        return _LogFile;
                    }
                    set { _LogFile = value; }
                }

                // PUBLIC METHODS
                public void WriteLineToLog(BugAutoTask.MsgLevel msgLevel, string logRecord) {
                    try {
                        // set msgLevel text
                        if (msgLevel == BugAutoTask.MsgLevel.Info) {
                            _MsgLevelText = "[Info @" + MsgTimeStamp() + "] ";
                        }
                        else if (msgLevel == BugAutoTask.MsgLevel.Warning) {
                            _MsgLevelText = "[Warning @" + MsgTimeStamp() + "] ";
                        }
                        else if (msgLevel == BugAutoTask.MsgLevel.Error) {
                            _MsgLevelText = "[Error @" + MsgTimeStamp() + "] ";
                        }
                        else {
                            _MsgLevelText = "[Error: unsupported message level @" + MsgTimeStamp() + "] ";
                        }

                        // write a line to the log file
                        StreamWriter logFile = new StreamWriter(_ApplicationDirectory + _LogFileName, true);
                        logFile.WriteLine(_MsgLevelText + logRecord);
                        logFile.Close();
                    }
                    catch (Exception) {
                        throw;
                    }
                }

                // PRIVATE METHODS
                private void CreateNewLogFile(ref FileInfo logFile) {
                    try {
                        string logFilePath = logFile.FullName;

                        // write the log file header
                        _MsgLevelText = "[Info @" + MsgTimeStamp() + "] ";
                        string cpu = string.Empty;
                        if (Environment.Is64BitOperatingSystem) {
                            cpu = " (x64)";
                        }
                        StreamWriter newLog = new StreamWriter(logFilePath, false);
                        newLog.Flush();
                        newLog.WriteLine(_MsgLevelText + "====================================================================");
                        newLog.WriteLine(_MsgLevelText + "Team Foundation Server Administration Log");
                        newLog.WriteLine(_MsgLevelText + "Version  : " + "1.0.0 Author: Bob Hardister");
                        newLog.WriteLine(_MsgLevelText + "DateTime : " + _LogTimestamp);
                        newLog.WriteLine(_MsgLevelText + "Type     : " + "OWS Custom TFS API Plug-in");
                        newLog.WriteLine(_MsgLevelText + "Activity : " + "Bug Auto Task Creation for CCB Approved Bugs");
                        newLog.WriteLine(_MsgLevelText + "Area     : " + "Build Explorer");
                        newLog.WriteLine(_MsgLevelText + "Assembly : " + "Ows.TeamFoundation.BugAutoTaskCreation.PlugIns.dll");
                        newLog.WriteLine(_MsgLevelText + "Location : " + @"C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\bin\Plugins");
                        newLog.WriteLine(_MsgLevelText + "User     : " + Environment.UserDomainName + @"\" + Environment.UserName);
                        newLog.WriteLine(_MsgLevelText + "Machine  : " + Environment.MachineName);
                        newLog.WriteLine(_MsgLevelText + "System   : " + Environment.OSVersion + cpu);
                        newLog.WriteLine(_MsgLevelText + "====================================================================");
                        newLog.WriteLine(_MsgLevelText);
                        newLog.Close();
                    }
                    catch (Exception) {
                        throw;
                    }
                }

                private string MsgTimeStamp() {
                    return DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss:fff");
                }
            }
        }


  • Postfix sendmail -f configuration

    - by William
    I have Postfix installed on two servers. One of them writes e-mail (satellite) and the other one delivers the e-mails (smarthost). When I send e-mail from the satellite server I'm using the sendmail command. My problem is that when e-mail arrives, the Return-Path is set to user@hostname, where user is the user that is running sendmail and hostname is the server's hostname. If I use the -f parameter with sendmail I can change that, but I'm hoping there is a way to do it in a configuration file for Postfix. Is this possible, or should I just deal with having to configure all my software to add the -f argument? Thanks in advance.
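    For what it's worth, one hedged possibility (my suggestion, not part of the original question) is Postfix's sender_canonical_maps, which rewrites the envelope sender (and hence the Return-Path) without needing -f. A sketch with placeholder addresses:

        # /etc/postfix/main.cf on the satellite
        sender_canonical_maps = hash:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical
        # rewrite local users' envelope sender to a fixed address
        william    william@example.com
        root       admin@example.com

        # then rebuild the map and reload:
        #   postmap /etc/postfix/sender_canonical && postfix reload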


  • ssh login successful, but scp password gives me "Permission denied"

    - by YANewb
    I'm trying to get some blogging software up on an organizational remote server. I tried to set up an SSH key but was having problems, and decided that getting the blog up and running was more important than dealing with the SSH key issue, so I ran ssh-keygen -R remoteserver.com. Now I can successfully log in with ssh -v [email protected] and the correct password. Once logged in I can move around and read any file and directory that I should be able to read. But when I try to edit an existing -rw-r--r-- file with VIM, it shows up as read-only; if I try to edit permissions I get chmod: file.ext: Operation not permitted; and if I try to scp a new file from my local machine I'm prompted for the remote user's password and then get scp: /home/path/to/file.ext: Permission denied. Since I didn't have any of these problems before I tried to set up the ssh key, I suspect these anomalies are a side effect of that, but I don't know how to troubleshoot this. So what does a foolish server-newb, such as myself, need to do to get edit capability back as a remote user?

    Addendum 1: My user IDs are different between my local machine and the remote server.

    For ssh I use ssh -v [email protected]; if I whoami I get remoteuser.
    For scp I use scp file.ext [email protected]:/path/to/file.ext from the local directory containing file.ext, while logged in as the local user; if I whoami I get localuser.

    The ls -l for two different files I've tried to scp:

        -rw-r--r--@ 1 localuser  localgroup  20 Feb 11 21:03 phpinfo.php
        -rw-r--r--  1 root       localgroup   4 Feb 11 22:32 test.txt

    The ls -l for the file I've tried to VIM:

        -rw-r--r--  1 remoteuser remotegroup 76 Jul 27  2009 info.txt

    Addendum 2: In the past I've set up ssh keys for git repositories. I don't want to completely destroy them, so in an attempt to follow a deer's train of thinking I renamed my ~/.ssh/ to ~/.ssh-bak/, then tested the different types of access. The abridged version of the terminal commands and results is below; I think everything is working until the 8th line from the end.

        localcomputer:~ localuser$ ssh -v [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file /Users/localuser/.ssh/identity type -1
        debug1: identity file /Users/localuser/.ssh/id_rsa type -1
        debug1: identity file /Users/localuser/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
        debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        The authenticity of host 'remoteserver.com (###.###.###.###)' can't be established.
        RSA key fingerprint is ##:##:##:##:##:##:##:##:##:##:##:##:##:##:##:##.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added 'remoteserver.com,###.###.###.###' (RSA) to the list of known hosts.
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/localuser/.ssh/identity
        debug1: Trying private key: /Users/localuser/.ssh/id_rsa
        debug1: Trying private key: /Users/localuser/.ssh/id_dsa
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Last login: Sun Feb 12 18:00:54 2012 from 68.69.164.123
        FreeBSD 6.4-RELEASE-p8 (VKERN) #1 r101746: Mon Aug 30 10:34:40 MDT 2010
        [remoteuser@remoteserver /home]$ ls -l
        total ###
        -rw-r--r--  1 remoteuser remotegroup 76 Aug 12  2009 info.txt
        [remoteuser@remoteserver /home]$ vim info.txt
        ~
        {at the bottom of the VIM screen it tells me it's [read only]}
        [remoteuser@remoteserver /home]$ whoami
        remoteuser
        [remoteuser@remoteserver /home]$ logout
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Connection to remoteserver.com closed.
        Transferred: sent 3872, received 12496 bytes, in 107.4 seconds
        Bytes per second: sent 36.1, received 116.4
        debug1: Exit status 0
        localcomputer:localdirectory name$ scp -v phpinfo.php [email protected]:/home/www/remotedirectory/phpinfo.php
        Executing: program /usr/bin/ssh host remoteserver.com, user remoteuser, command scp -v -t /home/www/remotedirectory/phpinfo.php
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to remoteserver.com [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file /Users/localuser/.ssh/identity type -1
        debug1: identity file /Users/localuser/.ssh/id_rsa type -1
        debug1: identity file /Users/localuser/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p2 FreeBSD-20110503
        debug1: match: OpenSSH_5.8p2 FreeBSD-20110503 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'remoteserver.com' is known and matches the RSA host key.
        debug1: Found key in /Users/localuser/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/localuser/.ssh/identity
        debug1: Trying private key: /Users/localuser/.ssh/id_rsa
        debug1: Trying private key: /Users/localuser/.ssh/id_dsa
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending command: scp -v -t /home/www/remotedirectory/phpinfo.php
        Sending file modes: C0644 20 phpinfo.php
        Sink: C0644 20 phpinfo.php
        scp: /home/www/remotedirectory/phpinfo.php: Permission denied
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: channel 0: free: client-session, nchannels 1
        debug1: fd 0 clearing O_NONBLOCK
        debug1: fd 1 clearing O_NONBLOCK
        Transferred: sent 1456, received 2160 bytes, in 0.6 seconds
        Bytes per second: sent 2322.3, received 3445.1
        debug1: Exit status 1


  • Steam on 64-bit 14.04: need some help, missing a few 32-bit libs

    - by YellowShark
    Steam says I'm missing the following libs, and I'm hoping someone can help me get things in better shape:

        xyz@abc:~$ STEAM_RUNTIME=0 steam
        Running Steam on ubuntu 14.04 64-bit
        STEAM_RUNTIME is disabled by the user
        Error: You are missing the following 32-bit libraries, and Steam may not run:
        libpangoft2-1.0.so.0
        libpango-1.0.so.0
        libgtk-x11-2.0.so.0
        libgdk_pixbuf-2.0.so.0
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        Installing breakpad exception handler for appid(steam)/version(1401381906_client)
        [2014-06-11 20:45:39] Startup - updater built May 29 2014 09:19:23
        [2014-06-11 20:45:39] Verifying installation...
        [2014-06-11 20:45:39] Verification complete
        [2014-06-11 20:45:42] Shutdown

    I tried installing the following i386 packages: libpango-1.0-0:i386, libpangoft2-1.0-0:i386, and libgdk-pixbuf2.0-0:i386, and symlinking the .so files (from usr/lib/i386whatever../) into the ~/.local/share/Steam/ubuntu12_32/ folder, but wasn't able to find the right match for the gtk-x11 lib, and ultimately wound up with a different, but still non-working, situation. So I've backtracked to this point and have removed those i386 packages for now. It's worth noting that Steam runs if I don't use STEAM_RUNTIME=0. Also, Steam seemed to "recognize" the i386 versions of the libpango & libpangoft2 libs after I symlinked them into place during my troubleshooting; when I would rerun STEAM_RUNTIME=0 steam, it wouldn't list those two items as missing anymore. Instead, though, I had a bunch of gtk-related issues: something about overlay-scrollbar not being available, as well as warnings that it can't find the murrine engine... a whole bunch of stuff that sounded like I'd gone too far down the wrong path. Anyhow, any help sorting this out would be appreciated, and thanks!
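    For what it's worth, a hedged sketch of the multiarch route (the package names are my best guess for 14.04, not confirmed by the poster): the four missing libraries map to these i386 packages, and the later GTK warnings hint at theme engines that also have i386 builds:

        sudo apt-get install libgtk2.0-0:i386 libpango-1.0-0:i386 \
            libpangoft2-1.0-0:i386 libgdk-pixbuf2.0-0:i386
        # possibly also, for the murrine / overlay-scrollbar warnings:
        sudo apt-get install gtk2-engines-murrine:i386 overlay-scrollbar-gtk2:i386

    The idea being that libgtk-x11-2.0.so.0 comes from libgtk2.0-0:i386, so no manual symlinking into the Steam folder should be needed.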


  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of Puppet and have been following the Apress "Pro Puppet" guide step by step.

    I have installed puppet:

        sudo aptitude install ruby libshadow-ruby1.8
        sudo aptitude install puppet puppetmaster facter

    I have edited /etc/puppet/puppet.conf to include certname:

        [master]
        certname=puppet.mydomain.com

    I have edited /etc/hosts and added the following line:

        127.0.0.1 puppet.mydomain.com puppet

    I have set the hostname of the server:

        echo "puppet.mydomain.com" > /etc/hostname
        hostname -F /etc/hostname

    And then I try to run puppet from the command line:

        puppet master --verbose --no-daemonize

    And puppet gives me this error:

        Could not parse for environment production: Could not find file /master.pp

    I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp, with the path before it pointing to my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com. I saw some online documentation mentioning this might be a problem; however, I was fairly sure that the hosts file would let me get around that.
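    One hedged line of investigation (my own, not from the question): the Ubuntu-packaged puppet may be older than the 2.6 release that introduced the "puppet master" subcommand syntax; a pre-2.6 puppet binary treats "master" as a manifest filename, which would explain the search for master.pp in the working directory. A quick check, assuming that's the cause:

        puppet --version
        # if this prints something pre-2.6 (e.g. 0.25.x), the old daemon
        # names are still the entry points:
        sudo puppetmasterd --verbose --no-daemonize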


  • How do I access shared folders on Ubuntu server from Mac OS?

    - by Stephen
    I have an old Dell desktop running Ubuntu 11.04, and I have Samba installed on it. I'm trying to access the shared folders on the Ubuntu machine from my Mac, so I go into Finder, click on 'Go' and 'Connect to Server'. I type in the IP address of the Ubuntu machine, smb://xxx.xxx.x.xx, and click Connect. I can then see the list of shared folders from the Ubuntu machine, so I know it's making a connection. But when I access the 'Music' folder I get an error message stating:

        There was an error connecting to the server "xxx.xxx.x.xx". Check the server name or IP address, and try again.

    Any thoughts, anyone?

    EDIT: I have an external hard drive attached to the server, and the folders I'm trying to access are located on that external hard drive. The location of the folder is /media/HD-CELU2/test, so I think the path from Finder should be smb://xxx.xxx.x.xx/media/HD-CELU2/test, but having tested this, I'm still not getting in. P.S. I'm using Samba as I have a Windows machine on my home network as well.
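    In case it helps frame the question: Samba shares are addressed by share name rather than by filesystem path, so a share for the external drive would normally be defined in /etc/samba/smb.conf along these lines. This is a hypothetical sketch; the share name and options below are my own placeholders:

        [test]
            path = /media/HD-CELU2/test
            browseable = yes
            read only = no
            valid users = stephen

        # after editing, restart Samba and connect from the Mac as
        # smb://xxx.xxx.x.xx/test (not the full filesystem path):
        #   sudo service smbd restart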


  • What is Ubuntu's Definition of a "Registered Application"?

    - by Tom
    I've run into this a few times when installing apps from source, and during the occasional hack with update-alternatives. So far it's only been a minor annoyance (i.e., it hasn't got in the way of the end goal), but it's now a frustration, as it's pointing to a hole in my knowledge base... so when I get a message that 'foo' is "not a registered application" (or I can't use foo's default icon because Ubuntu has no knowledge of 'foo'):

    (1) What defines a "registered application"?

    (2) How can I define an application installed from source (and likely residing in $HOME/bin/app-name) such that it packs the same functionality as a package installed from a .deb? (If the solution is not self-evident from answer 1.)

    Example: I download and unpack daily dev builds of sublime-text-2 to /home/tom/bin/sublime-text-2. I've created a *.desktop file with appropriate shortcuts, etc. But the icon for Sublime cannot be displayed in any launcher, even if I provide a full pathname in the Icon option. The solution is to install a second instance of Sublime from a deb package. When I install sublime-text-2 from a .deb package, it installs under /usr/bin and /usr/lib, the installed .desktop file is stored under /usr/share/applications, and the relevant line reads: icon=sublime_text. Where's the linkage I'm missing? Somehow Ubuntu knows how to extract the icon from sublime_text in the latter case, but not in the former (again, even with a full path provided).
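    For reference, a minimal .desktop file of the kind being compared here might look like the sketch below. The paths are illustrative placeholders; the packaged version's bare icon=sublime_text works because a bare name is resolved against the system icon theme directories, which is my reading of the situation rather than something stated in the question.

        [Desktop Entry]
        Type=Application
        Name=Sublime Text 2 (dev build)
        Exec=/home/tom/bin/sublime-text-2/sublime_text %F
        Icon=/home/tom/bin/sublime-text-2/Icon/128x128/sublime_text.png
        Categories=Development;TextEditor;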


  • WP E Commerce Safe Mode restriction error [on hold]

    - by Mustafa Kamal
    I have my online shop, created with WP e-Commerce, getting broken after I moved it to another server. I can be sure that the problem comes from WP e-Commerce because when I disable that plugin, everything runs as expected. This is the exact error message:

        Warning: session_start() [function.session-start]: SAFE MODE Restriction in effect. The script whose uid is 515 is not allowed to access /tmp owned by uid 0 in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

        Fatal error: session_start() [function.session-start]: Failed to initialize storage module: files (path: ) in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

    I've tried to turn off safe mode in my PHP configuration: nothing happens, the error's still there. I thought it was some kind of permission issue, so I tried to change the /tmp permissions to 777: nothing happens. I googled some more and suspect it might have something to do with the FastCGI configuration and such, which I totally don't understand. My search results mostly suggest consulting the web hosting provider or even moving to another host. But in this case I am the owner of the server (a VPS with cPanel/WHM), and I don't have any idea how to solve this kind of problem.
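    A hedged guess at where to look (mine, not the poster's): when safe mode or open_basedir keeps PHP out of /tmp, pointing the session handler at a directory owned by the site's own uid often sidesteps the storage-module error. Illustrative php.ini values, with placeholder paths:

        ; php.ini (sketch)
        safe_mode = Off
        session.save_handler = files
        ; use a directory owned by the account running the script (uid 515 here)
        session.save_path = /home/mikalu/tmp

        ; remember to create the directory first:
        ;   mkdir -p /home/mikalu/tmp && chown mikalu:mikalu /home/mikalu/tmp

    If PHP runs under FastCGI with per-vhost php.ini files, the same directives would need to go into whichever php.ini that vhost actually loads.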


  • My PowerShell functions do not appear to be used

    - by Frank
    Hi there, I have a ps1 script in which I define 2 functions as such:

        function Invoke-Sql([string]$query)
        {
            Invoke-Sqlcmd -ServerInstance $Server -Database $DB -User $User -Password $Password -Query $query
        }

        function Get-Queued
        {
            Invoke-Sql "Select * From Comment where AwaitsModeration = 1"
        }

    I then call the ps1 file by typing it in (it's in a folder in the path, and autocompletion works). However, I cannot start using the functions afterwards. I am confused, because when I copy/paste the functions into the console, all is fine and they work. I also have a function defined in my profile, and it works. Where am I thinking wrong? Why doesn't what I'm trying to do work?
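    A plausible explanation (my addition, not confirmed in the post): a script run by name executes in its own scope, so its function definitions vanish when the script returns; dot-sourcing the script runs it in the current session's scope instead. A sketch, assuming the file is named MyFunctions.ps1:

        # running the script normally defines the functions only inside the script's own scope
        PS> MyFunctions.ps1

        # dot-sourcing (note the leading dot and space) runs it in the current scope,
        # so Invoke-Sql and Get-Queued stay defined afterwards
        PS> . MyFunctions.ps1
        PS> Get-Queued

    Functions in the profile survive because the profile is itself dot-sourced into the session at startup.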


  • Windows 7, file properties, date modified, how do you show seconds?

    - by Jordan Weinstein
    Anyone know a way to immediately show the seconds of a file's date-modified property in the GUI? If you create a file, any file in any directory, then right-click and choose Properties, the date modified (if it's recent) will say something like "dd/mm/yyyy hh:mm, one minute ago" (reminder: this is in Windows 7; Windows XP showed it normally, then they changed something). If you wait a while, eventually you'll see the seconds. I'm not sure how long "a while" is, but this is incredibly annoying if you want to troubleshoot something that relies on the seconds of timestamps... is there a setting, or perhaps a registry key I can change? I'm literally using Chrome, pasting in the path of the directory, to be able to see the seconds quickly (as a workaround), but it would be nice to be able to use Win7.


  • Server being used to send spam mail. How do I investigate?

    - by split_account
    Problem

    I think my server is being used to send spam with sendmail. I'm getting a lot of mail queued up that I don't recognize, and my mail.log and syslog are getting huge. I've shut down sendmail, so none of it is getting out, but I can't work out where it's coming from.

    Investigation so far:

    I've tried the solution in the blog post below, also shown in this thread. It's meant to add a header showing where the mail is being generated and to log all mail to a file, so I changed the following lines in my php.ini file:

        mail.add_x_header = On
        mail.log = /var/log/phpmail.log

    But nothing is appearing in phpmail.log. I used the command here to investigate cron jobs for all users, but nothing is out of place. The only cron being run is the cron for the website. And then I brought up all PHP files which had been modified in the last 30 days, but none of them look suspicious. What else can I do to find where this is coming from?

    Mail.log reports

    I turned sendmail back on for a second. Here is a small sample of the reports:

        Jun 10 14:40:30 ubuntu12 sm-mta[13684]: s5ADeQdp013684: from=<>, size=2431, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Jun 10 14:40:30 ubuntu12 sm-msp-queue[13674]: s5ACK1cC011438: to=www-data, delay=01:20:14, xdelay=00:00:00, mailer=relay, pri=571670, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s5ADeQdp013684 Message accepted for delivery)
        Jun 10 14:40:30 ubuntu12 sm-mta[13719]: s5ADeQdp013684: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=32683, dsn=2.0.0, stat=Sent
        Jun 10 14:40:30 ubuntu12 sm-mta[13684]: s5ADeQdr013684: from=<[email protected]>, size=677, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Jun 10 14:40:31 ubuntu12 sm-msp-queue[13674]: s5AC0gpi011125: to=www-data, ctladdr=www-data (33/33), delay=01:39:49, xdelay=00:00:01, mailer=relay, pri=660349, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s5ADeQdr013684 Message accepted for delivery)
        Jun 10 14:40:31 ubuntu12 sm-mta[13721]: s5ADeQdr013684: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:01, xdelay=00:00:00, mailer=local, pri=30946, dsn=2.0.0, stat=Sent
        Jun 10 14:40:31 ubuntu12 sm-mta[13684]: s5ADeQdt013684: from=<[email protected]>, size=677, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Jun 10 14:40:31 ubuntu12 sm-msp-queue[13674]: s5ACF2Nq011240: to=www-data, ctladdr=www-data (33/33), delay=01:25:29, xdelay=00:00:00, mailer=relay, pri=660349, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s5ADeQdt013684 Message accepted for delivery)
        Jun 10 14:40:31 ubuntu12 sm-mta[13723]: s5ADeQdt013684: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:00:00, xdelay=00:00:00, mailer=local, pri=30946, dsn=2.0.0, stat=Sent
        Ju

    Further investigation

    I spotted 4 spam accounts registered in the past day, which is suspicious; however, all have normal user privileges. There are no contact forms on the site; there are a number of forms, and they take either filtered text input or plain text input. Mail is still being queued up having switched the website to maintenance mode, which blocks out everyone but the admin. OK, more investigation: it looks like the email is being sent by my website's cron, which runs every 5 minutes.
    However, there are no cron jobs I've set up which run more than once an hour and show on the website log, so presumably someone has managed to edit my cron somehow.

    Copy of email:

        V8
        T1402410301
        K1402411201
        N2
        P120349
        I253/1/369045
        MDeferred: Connection refused by [127.0.0.1]
        Fbs
        $_www-data@localhost
        ${daemon_flags}c u
        Swww-data
        [email protected]
        MDeferred: Connection refused by [127.0.0.1]
        C:www-data
        rRFC822; [email protected]
        RPFD:www-data
        H?P?Return-Path: <?g>
        H??Received: (from www-data@localhost) by ubuntu12.pcsmarthosting.co.uk (8.14.4/8.14.4/Submit) id s5AEP13T015507 for www-data; Tue, 10 Jun 2014 15:25:01 +0100
        H?D?Date: Tue, 10 Jun 2014 15:25:01 +0100
        H?x?Full-Name: CronDaemon
        H?M?Message-Id: <[email protected]>
        H??From: root (Cron Daemon)
        H??To: www-data
        H??Subject: Cron <www-data@ubuntu12> /usr/bin/drush @main elysia-cron
        H??Content-Type: text/plain; charset=ANSI_X3.4-1968
        H??X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin>
        H??X-Cron-Env: <COLUMNS=80>
        H??X-Cron-Env: <SHELL=/bin/sh>
        H??X-Cron-Env: <HOME=/var/www>
        H??X-Cron-Env: <LOGNAME=www-data>
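    Since the queue file's Subject line points at the site cron (drush elysia-cron), a few hedged follow-up commands that could narrow this down (my suggestions, and the queue paths assume Ubuntu's default sendmail layout):

        mailq                                    # summarize what is currently queued
        sudo ls -l /var/spool/mqueue /var/spool/mqueue-client   # raw queue/control files
        sudo crontab -u www-data -l              # what the web user's crontab actually runs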


  • ArchBeat Link-o-Rama for 11/18/2011

    - by Bob Rhubart
    IT executives taking lead role with both private and public cloud projects: survey | Joe McKendrick
    "The survey, conducted among members of the Independent Oracle Users Group, found that both private and public cloud adoption are up—30% of respondents report having limited-to-large-scale private clouds, up from 24% only a year ago. Another 25% are either piloting or considering private cloud projects. Public cloud services are also being adopted for their enterprises by more than one out of five respondents." - Joe McKendrick

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    OIM 11g OID (LDAP) Groups Request-Based Provisioning with custom approval - Part I | Alex Lopez
    In part one of a two-part blog post, Alex Lopez illustrates "an implementation of a Custom Approval process and a Custom UI based on ADF to request entitlements for users which in turn will be converted to Group memberships in OID."

    ArchBeat Podcast Information Integration - Part 3/3
    "Oracle Information Integration, Migration, and Consolidation" author Jason Williamson, co-author Tom Laszeski, and book contributor Marc Hebert talk about upcoming projects and about what they've learned in writing their book.

    InfoQ: Enterprise Shared Services and the Cloud | Ganesh Prasad
    As an industry, we have converged onto a standard three-layered service model (IaaS, PaaS, SaaS) to describe cloud computing, with each layer defined in terms of the operational control capabilities it offers. This is unlike enterprise shared services, which have unique characteristics around ownership, funding and operations, and they span the SaaS and PaaS layers. Ganesh Prasad explores the differences.

    Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level
    Oracle ACE Director Andrejus Baranovskis describes "how jbo.doconnectionpooling = true and jbo.txn.disconnect_level = 1 properties affect ADF application performance."

    Exploring TCP throughput with DTrace | Alan Maguire
    "According to the theory," says Maguire, "when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput."


  • Context Sensitive JTable

    - by Geertjan
    Here's a plain old JTable on the NetBeans Platform. Whenever the toolbar button is clicked, information about the currently selected row is displayed in the status bar. Normally, the above would be achieved in NetBeans Platform applications via Nodes publishing their underlying business object when the selection changes. In this case, there are no Nodes at all. There's only a JTable and a DefaultTableModel, i.e., all pure Java Swing. So, how does it work? To follow the logic, it makes sense to create the example yourself, starting with the Stock object:

        public class Stock {

            String name;
            String desc;

            public Stock() {
            }

            public Stock(String name, String desc) {
                this.name = name;
                this.desc = desc;
            }

            public String getDesc() {
                return desc;
            }

            public String getName() {
                return name;
            }

            public void setDesc(String desc) {
                this.desc = desc;
            }

            public void setName(String name) {
                this.name = name;
            }
        }

    Next, create a new Window Component via the wizard and then rewrite the constructor as follows:

        public final class MyWindowTopComponent extends TopComponent {

            private final InstanceContent ic = new InstanceContent();

            public MyWindowTopComponent() {
                initComponents();
                // Statically create a few stocks;
                // in reality these would come from a data source of some kind:
                List<Stock> list = new ArrayList();
                list.add(new Stock("AMZN", "Amazon"));
                list.add(new Stock("BOUT", "About.com"));
                list.add(new Stock("Something", "Something.com"));
                // Create a JTable, passing the List above to a DefaultTableModel:
                final JTable table = new JTable(StockTableModel(list));
                // Whenever the mouse is clicked on the table,
                // somehow construct a new Stock object
                // (or get it from the List above) and publish it:
                table.addMouseListener(new MouseAdapter() {
                    @Override
                    public void mousePressed(MouseEvent e) {
                        int selectedColumn = table.getSelectedColumn();
                        int selectedRow = table.getSelectedRow();
                        Stock s = new Stock();
                        if (selectedColumn == 0) {
                            s.setName(table.getModel().getValueAt(selectedRow, 0).toString());
                            s.setDesc(table.getModel().getValueAt(selectedRow, 1).toString());
                        } else {
                            s.setName(table.getModel().getValueAt(selectedRow, 1).toString());
                            s.setDesc(table.getModel().getValueAt(selectedRow, 0).toString());
                        }
                        ic.set(Collections.singleton(s), null);
                    }
                });
                JScrollPane scrollPane = new JScrollPane(table);
                add(scrollPane, BorderLayout.CENTER);
                // Put the dynamic InstanceContent into the Lookup:
                associateLookup(new AbstractLookup(ic));
            }

            private DefaultTableModel StockTableModel(List<Stock> stockList) {
                DefaultTableModel stockTableModel = new DefaultTableModel() {
                    @Override
                    public boolean isCellEditable(int row, int column) {
                        return false;
                    }
                };
                Object[] columnNames = new Object[2];
                columnNames[0] = "Symbol";
                columnNames[1] = "Name";
                stockTableModel.setColumnIdentifiers(columnNames);
                Object[] rows = new Object[2];
                ListIterator<Stock> stockListIterator = stockList.listIterator();
                while (stockListIterator.hasNext()) {
                    Stock nextStock = stockListIterator.next();
                    rows[0] = nextStock.getName();
                    rows[1] = nextStock.getDesc();
                    stockTableModel.addRow(rows);
                }
                return stockTableModel;
            }
            ...
            ...
            ...

    And now, since you're publishing a new Stock object whenever the user clicks in the table, you can create loosely coupled Actions, like this:

        @ActionID(category = "Edit", id = "org.my.ui.ShowStockAction")
        @ActionRegistration(iconBase = "org/my/ui/Datasource.gif", displayName = "#CTL_ShowStockAction")
        @ActionReferences({
            @ActionReference(path = "Menu/File", position = 1300),
            @ActionReference(path = "Toolbars/File", position = 300)
        })
        @Messages("CTL_ShowStockAction=Show Stock")
        public final class ShowStockAction implements ActionListener {

            private final Stock context;

            public ShowStockAction(Stock context) {
                this.context = context;
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                StatusDisplayer.getDefault().setStatusText(context.getName() + " / " + context.getDesc());
            }
        }


  • Can't run telnet in console opened by Autohotkey

    - by Steve Crane
    I have enabled the telnet client on my Windows 7 64-bit machine, and if I open the Start menu and launch cmd from there, I can run telnet. I normally use the keyboard shortcut Win-C, implemented by this AutoHotkey snippet, to open a console:

        #c::Run, C:\WINDOWS\system32\cmd.exe

    For some strange reason, when I try to run telnet in a console window opened this way, I get:

        Microsoft Windows [Version 6.1.7601]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.

        C:\Users\Steve\Documents>telnet
        'telnet' is not recognized as an internal or external command,
        operable program or batch file.

    Running path in any console, regardless of how it was opened, produces the same output. Can anyone shed any light on why telnet might run in one console but not the other?
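    One hedged guess (mine, not established in the question): if AutoHotkey is running as a 32-bit process, the cmd.exe it launches is subject to WOW64 file-system redirection into SysWOW64, which has no telnet.exe, while PATH itself looks identical. The Sysnative alias bypasses the redirection; a sketch:

        ; launch the native 64-bit cmd.exe from a 32-bit AutoHotkey process
        #c::Run, %A_WinDir%\Sysnative\cmd.exe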

    Read the article

  • How do I make my DualShock 3 gamepad work in Ubuntu 14.04?

    - by user290527
    When my desktop computer was running Ubuntu 12.04, my PS3 controllers would work over USB. I didn't need to do any special setup: I could just plug one in before starting SuperTuxKart and it would be recognized. I can also do this on my laptop (still running 12.04). Since I gave my desktop a fresh install of Ubuntu 14.04, the controller has never worked. I have experimented with some of the tools I found while researching the problem. Here is what I get with xboxdrv:

        liam@Liam-CustomDesktop:~$ sudo xboxdrv --detach-kernel-driver
        xboxdrv 0.8.5 - http://pingus.seul.org/~grumbel/xboxdrv/
        Copyright © 2008-2011 Ingo Ruhnke <[email protected]>
        Licensed under GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
        This program comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to redistribute it under
        certain conditions; see the file COPYING for details.

        Controller:        PLAYSTATION(R)3 Controller
        Vendor/Product:    054c:0268
        USB Path:          003:012
        Controller Type:   Playstation 3 USB

        Your Xbox/Xbox360 controller should now be available as:
          /dev/input/js0
          /dev/input/event16

        Press Ctrl-c to quit, use '--silent' to suppress the event output

    So the controller's existence is acknowledged on my computer, but it never works for input. In my old installation, I didn't even need software like xboxdrv for it to work. I have never tried Bluetooth on either computer, and I don't think my desktop even has it. So now, how do I make my gamepad work in Ubuntu 14.04?
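    A hedged first diagnostic (assuming the joystick utilities from the standard Ubuntu repositories, and the device path xboxdrv reported): check whether the device actually delivers events before blaming the game's input configuration.

        # Sketch only; package and device names are the usual defaults, not confirmed.
        sudo apt-get install joystick
        jstest /dev/input/js0    # move the sticks; the printed axis values should change

    If jstest shows no movement, the problem is at the driver level rather than in SuperTuxKart.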

    Read the article

  • What's the best external hard drive configuration for a software developer laptop?

    - by Dan
    I've got a Dell laptop that I use as a software development box at work, and I find that the drive is usually the bottleneck. I'd like to hook up two 10k RPM drives striped for performance. I've looked for drive enclosures with RAID, but there don't seem to be many choices, and I'm worried about compatibility with the drives (preferably SATA II). Also, I don't have an external SATA connection on my machine, so it'll have to be USB 2.0 for now. Am I headed down the right path, or am I missing a much simpler configuration?

    Read the article

  • NodeJS Supervisord Hashlib

    - by enedebe
    I have a problem with my NodeJS app: the inclusion of the hashlib library. I've followed the instructions more than ten times: get a clone of the repo, run make and make install. NodeJS is installed in the default path, and that's the tricky point. When I launch node app.js myself, it works perfectly. The problem starts when I configure supervisord to run it with the same user and the same config file that works on my other systems; there, NodeJS can't find hashlib:

        module.js:337
            throw new Error("Cannot find module '" + request + "'");
                  ^
        Error: Cannot find module 'hashlib'

    I'm going crazy. What can I do? Why does node work when my user launches it from the console, but not under supervisord? Thanks!
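    For what it's worth, a hedged sketch of a supervisord stanza illustrating the usual suspect: supervisord does not load the user's shell profile, so PATH and NODE_PATH can differ from an interactive session even when the user matches. Every path below is an assumption, not taken from the question:

        ; hypothetical /etc/supervisor/conf.d/myapp.conf
        [program:myapp]
        command=/usr/local/bin/node /srv/myapp/app.js
        directory=/srv/myapp
        user=enedebe
        ; make the globally installed hashlib visible to require():
        environment=NODE_PATH="/usr/local/lib/node_modules"

    Comparing the output of echo $NODE_PATH in the working console against the environment supervisord provides should confirm or rule this out.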

    Read the article

  • JkWorkersFile: Can't find the workers file specified

    - by Vasan
    I am trying to set up a simple horizontal Tomcat cluster on Windows XP. I have created a workers.properties file in the conf/ directory, next to httpd.conf. However, when starting Apache with httpd.exe, I get the error:

        JkWorkersFile: Can't find the workers file specified

    httpd.conf has these entries:

        LoadModule jk_module modules/mod_jk.so
        JkLogFile "logs/mod_jk.log"
        JkLogLevel error
        JkMount /TestProject loadbalancer
        JkMount /TestProject/* loadbalancer
        JkWorkersFile conf/workers.properties

    I tried specifying the absolute path as well, i.e.:

        JkWorkersFile "C:/Program Files/Apache Software Foundation/Apache2.2/conf/workers.properties"

    but still ended up with the same problem. Below are the entries from workers.properties:

        workers.tomcat_home=$TOMCAT_HOME
        workers.java_home=$JAVA_HOME
        ps=/
        worker.list=tomcatA,tomcatB,tomcatC,loadbalancer

        worker.tomcatA.port=8109
        worker.tomcatA.host=localhost
        worker.tomcatA.type=ajp13
        worker.tomcatA.lbfactor=1

        worker.tomcatB.port=8209
        worker.tomcatB.host=localhost
        worker.tomcatB.type=ajp13
        worker.tomcatB.lbfactor=1

        worker.tomcatC.port=8309
        worker.tomcatC.host=localhost
        worker.tomcatC.type=ajp13
        worker.tomcatC.lbfactor=1

        worker.loadbalancer.type=lb
        worker.loadbalancer.balanced_workers=tomcatA,tomcatB,tomcatC
        worker.loadbalancer.sticky_session=1

    Can anyone help me resolve this, please?
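    One hedged thing to rule out before digging into mod_jk itself (an assumption, since no directory listing is shown): Windows Explorer hides known extensions by default, so a file created in Notepad can silently be workers.properties.txt. Checking from a console sidesteps the hidden-extension trap:

        REM Sketch only: list anything matching the expected file name.
        dir "C:\Program Files\Apache Software Foundation\Apache2.2\conf\workers.prop*"

    If the file shows up with a .txt suffix, renaming it should let the relative JkWorkersFile path, which mod_jk resolves against ServerRoot, work as written.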

    Read the article

  • Stuck on enemy movement

    - by Syed
    I am making a TD game in Unity. Initially I made all of my enemy movement frame-rate dependent. Say I had grid point 1 at -22.65 and the next at -21.1:

        (-22.65) _________ (-21.1) _______ (-21.1 + 1.55) ......

    So the distance between two points on the x axis is 1.55, divided into 25 jumps of 0.062 per frame. On reaching the next grid point, the enemy finds its path again. All went fine until I needed fast-forward and pause features. I used Unity's timeScale property, but it won't work, because the movement is frame dependent. I also tried doubling the enemy's jump size when the fast-forward button is clicked, but that has issues: the jumps no longer divide the 1.55 distance evenly, so the enemy fails to land exactly on the next grid point. Could someone suggest a solution to my problem? Do I need to change the enemy movement code to make it frame-rate independent? I need the enemy to land exactly on the grid points, and later I also need to slow down an individual enemy's speed when a tower fires on it. Thanks
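    A minimal sketch of the usual fix, not the poster's actual code: scale per-frame movement by Time.deltaTime. Unity scales Time.deltaTime by Time.timeScale, so setting timeScale to 0 pauses and 2 fast-forwards with no extra code, and Vector3.MoveTowards clamps the step so the enemy lands exactly on the grid point. A per-enemy multiplier (assumed here as a plain field) covers the tower slow-down effect:

        using UnityEngine;

        public class EnemyMover : MonoBehaviour
        {
            public float speed = 1.0f;        // world units per second
            public float slowFactor = 1.0f;   // set below 1.0 when a tower hits this enemy
            private Vector3 target;           // next grid point, set by the pathfinding code

            void Update()
            {
                // deltaTime already includes timeScale, so pause and
                // fast-forward need no special handling here.
                float step = speed * slowFactor * Time.deltaTime;
                transform.position = Vector3.MoveTowards(transform.position, target, step);

                // MoveTowards clamps the step, so the position becomes
                // exactly the target on arrival.
                if (transform.position == target)
                {
                    // reached the grid point; recompute the path and set the next target here
                }
            }
        }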

    Read the article

  • C#/.NET Little Wonders: Constraining Generics with Where Clause

    - by James Michael Hare
    Back when I was primarily a C++ developer, I loved C++ templates.  The power of writing very reusable generic classes brought the art of programming to a brand new level.  Unfortunately, when .NET 1.0 came about, it had no template equivalent.  With .NET 2.0, however, we finally got generics, which once again let us spread our wings and program more generically in the world of .NET.  However, C# generics behave in some ways very differently from their C++ template cousins.  There is a handy clause, though, that helps you navigate these waters to make your generics more powerful.

    The Problem – C# Assumes the Lowest Common Denominator

    In C++, you can create a template and do nearly anything syntactically possible on the template parameter; C++ will not check whether the methods, fields, or operations invoked are valid until you declare a realization of the type.  Let me illustrate with a C++ example:

        // compiles fine, C++ makes no assumptions as to T
        template <typename T>
        class ReverseComparer
        {
        public:
            int Compare(const T& lhs, const T& rhs)
            {
                return rhs.CompareTo(lhs);
            }
        };

    Notice that we are invoking a method CompareTo() on the template type T.  Because we don't know at this point what type T is, C++ makes no assumptions and there are no errors.  C++ tends to take the path of not checking the template type usage until the template is actually instantiated with a specific type, which differs from the behavior of C#:

        // this will NOT compile! C# assumes the lowest common denominator.
        public class ReverseComparer<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return rhs.CompareTo(lhs);
            }
        }

    So why does C# give us a compiler error even when we don't yet know what type T is?  This is because C# took a different path in how generics were designed.  Unless you specify otherwise, for the purposes of the code inside the generic method, T is basically treated like an object (notice I didn't say T is an object).  That means that any operations, fields, methods, properties, etc. that you attempt to use on type T must be available on the lowest common denominator type: object.  Now, while object has the broadest applicability, it also has the fewest specifics.  So how do we allow our generic type placeholder to do more than just what object can do?

    Solution: Constrain the Type with a Where Clause

    So how do we get around this in C#?  The answer is to constrain the generic type placeholder with the where clause.  Basically, the where clause allows you to specify additional constraints that the actual type used to fill the generic type placeholder must satisfy.

    You might think that narrowing the scope of a generic means a weaker generic.  In reality, though it limits the number of types that can be used with the generic, it also gives the generic more power to deal with those types.  In effect, these constraints say that if the type meets the given constraint, you can perform the activities that pertain to that constraint on the generic placeholders.

    Constraining a Generic Type to an Interface or Superclass

    One of the handiest where clause constraints is the ability to specify that the generic type must implement a certain interface or inherit from a certain base class.  For example, you can't call CompareTo() in our first C# generic without constraints, but if we constrain T to IComparable<T>, we can:

        public class ReverseComparer<T>
            where T : IComparable<T>
        {
            public int Compare(T lhs, T rhs)
            {
                return rhs.CompareTo(lhs);
            }
        }

    Now that we've constrained T to an implementation of IComparable<T>, our variables of generic type T may call any members specified in IComparable<T> as well.  This means that the call to CompareTo() is now legal.

    If you constrain your type, you will also get a compiler error if you attempt to use a type that doesn't meet the constraint.  This is much better than the syntax error you would get deep within the C++ template code itself when you used a type the template didn't support.

    Constraining a Generic Type to Reference Types Only

    Sometimes you want to assign an instance of a generic type to null, but you can't do this without constraints, because you have no guarantee that the type used to realize the generic is not a value type, for which null is meaningless.  We can fix this by specifying the class constraint in the where clause.  By declaring that the generic type must be a class, we are saying that it is a reference type, and this allows us to assign null to instances of that type:

        public static class ObjectExtensions
        {
            public static TOut Maybe<TIn, TOut>(this TIn value, Func<TIn, TOut> accessor)
                where TOut : class
                where TIn : class
            {
                return (value != null) ? accessor(value) : null;
            }
        }

    In the example above, we want to access a property off of a reference and, if that reference is null, pass the null on down the line.  To do this, both the input type and the output type must be reference types.  (Yes, nullable value types could also be considered applicable at a logical level, but there's no direct constraint for those.)

    Constraining a Generic Type to Value Types Only

    Similarly to constraining a generic type to be a reference type, you can also constrain a generic type to be a value type.  To do this you use the struct constraint, which specifies that the generic type must be a value type (primitive, struct, enum, etc.).  Consider the following method, which converts anything that is IConvertible (int, double, string, etc.) to the value type you specify, or to null if the instance is null:

        public static T? ConvertToNullable<T>(IConvertible value)
            where T : struct
        {
            T? result = null;

            if (value != null)
            {
                result = (T)Convert.ChangeType(value, typeof(T));
            }

            return result;
        }

    Because T is constrained to be a value type, we can use T? (System.Nullable<T>), which we could not do if T were a reference type.

    Constraining a Generic Type to Require a Default Constructor

    You can also constrain a type to require the existence of a default constructor.  Because by default C# doesn't know what constructors a generic type placeholder does or does not have, it typically can't allow you to call one.  That said, if you give it the new() constraint, the type used to realize the generic type must have a default (no-argument) constructor.

    Let's assume you have a generic adapter class that, given some mappings, will adapt an item from type TFrom to type TTo.  Because it must create a new instance of type TTo in the process, we need to specify that TTo has a default constructor:

        // Given a set of Action<TFrom, TTo> mappings, will map TFrom to TTo
        public class Adapter<TFrom, TTo> : IEnumerable<Action<TFrom, TTo>>
            where TTo : class, new()
        {
            // The list of translations from TFrom to TTo
            public List<Action<TFrom, TTo>> Translations { get; private set; }

            // Construct with an empty translation set.
            public Adapter()
            {
                // did this instead of auto-properties to allow simple use of initializers
                Translations = new List<Action<TFrom, TTo>>();
            }

            // Add a translator to the collection, useful for initializer list
            public void Add(Action<TFrom, TTo> translation)
            {
                Translations.Add(translation);
            }

            // Add a translator that first checks a predicate to determine if the translation
            // should be performed, then translates if the predicate returns true
            public void Add(Predicate<TFrom> conditional, Action<TFrom, TTo> translation)
            {
                Translations.Add((from, to) =>
                    {
                        if (conditional(from))
                        {
                            translation(from, to);
                        }
                    });
            }

            // Translates an object forward from a TFrom object to a TTo object.
            public TTo Adapt(TFrom sourceObject)
            {
                var resultObject = new TTo();

                // Process each translation
                Translations.ForEach(t => t(sourceObject, resultObject));

                return resultObject;
            }

            // Returns an enumerator that iterates through the collection.
            public IEnumerator<Action<TFrom, TTo>> GetEnumerator()
            {
                return Translations.GetEnumerator();
            }

            // Returns an enumerator that iterates through a collection.
            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }

    Notice, however, that you can't specify any other constructor; you can only specify that the type has a default (no-argument) constructor.

    Summary

    The where clause is an excellent tool that gives your .NET generics even more power than just the base "object level" behavior.  There are a few things you cannot (currently) specify with constraints, though:

    - Cannot specify that the generic type must be an enum.
    - Cannot specify that the generic type must have a certain property or method without specifying a base class or interface – that is, you can't say that the generic must have a Start() method.
    - Cannot specify that the generic type allows arithmetic operations.
    - Cannot specify that the generic type requires a specific non-default constructor.

    In addition, you cannot overload a template definition with different, opposing constraints.  For example, you can't define an Adapter<T> where T : struct and an Adapter<T> where T : class.  Hopefully, in the future we will get some of these things to make the where clause even more useful, but until then what we have is extremely valuable in making our generics more user-friendly and more powerful!

    Technorati Tags: C#,.NET,Little Wonders,BlackRabbitCoder,where,generics
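    To round out the new() constraint section, here is a hedged usage sketch of the Adapter<TFrom, TTo> class above; the PersonDto and PersonView types and their members are invented purely for illustration.  It shows why implementing IEnumerable plus the Add() overloads pays off: the adapter can be populated with collection-initializer syntax:

        using System;
        using System.Collections.Generic;

        public class PersonDto
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
        }

        public class PersonView
        {
            public string FullName { get; set; }
        }

        public static class AdapterDemo
        {
            public static void Main()
            {
                // Collection-initializer syntax: each element (or { x, y } pair)
                // is routed to a matching Add() overload on Adapter.
                var adapter = new Adapter<PersonDto, PersonView>
                {
                    (from, to) => to.FullName = from.FirstName + " " + from.LastName,
                    { dto => dto.FirstName == null, (from, to) => to.FullName = "(unknown)" }
                };

                var view = adapter.Adapt(new PersonDto { FirstName = "Ada", LastName = "Lovelace" });
                Console.WriteLine(view.FullName);   // prints: Ada Lovelace
            }
        }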

    Read the article
