Search Results

Search found 10071 results on 403 pages for 'remote branch'.

  • Git: Is there a way to figure out where a commit was cherry-picked from?

    - by EricSchaefer
    If I cherry-pick from multiple branches, is there a simple way to figure out where a commit came from (e.g. the SHA of the original commit)? Example:

    - at the master branch
    - cherry-pick commit A from a dev branch
    - A becomes D at the master branch

    Before:

        * B (master) Feature Y
        | * C (dev) Feature Z
        | * A Feature X
        |/
        * 3
        * 2
        * 1

    After:

        * D (master) Feature X
        * B Feature Y
        | * C (dev) Feature Z
        | * A Feature X
        |/
        * 3
        * 2
        * 1

    Is it possible to figure out that D was cherry-picked from A (aside from searching for the commit message)?
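    Two git facilities can help here; a minimal sketch, using the commit labels from the example above as placeholders:

        # Record provenance up front: -x appends the origin SHA to the message
        git checkout master
        git cherry-pick -x <sha-of-A>
        # the new commit's message ends with "(cherry picked from commit ...)"

        # Match commits after the fact: equivalent patches produce identical patch-ids
        git show <sha-of-D> | git patch-id
        git show <sha-of-A> | git patch-id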

  • DVCS - What's the downside of rewriting unpublished history?

    - by user1447278
    I was wondering what in particular is the downside of "losing history" in a development process. One famous example is of course git rebase -i / git merge --squash, but also what is described here: http://fourkitchens.com/blog/2009/04/20/alternatives-rebasing-bazaar under "I want to clean up my commit history prior to submitting my changes to the mainline." I can see that exporting patches and applying them to another branch would lose the "history" of the branch, but why would that branch and its commit history be useful after it has been merged? Can someone elaborate on why such techniques are considered "dirty"? Why does it matter in which order changes were originally committed in the first place as long as they can be applied to the main branch?

  • git merge, git rebase: neither seems to work. Should I delete my GitHub fork and refork from the upstream master?

    - by Joan Yin
    I have to confess my GitHub sins. Four months ago I forked an upstream repo and, without knowing much about git and pull requests, did some work on the master branch locally. Later on I realized the mistake, created a new branch, squashed the changes into one, and later successfully sent a PR from that branch. The PR was accepted, and I moved on. Now I need to submit another PR, but my master branch is so messed up that when I do a merge or rebase there are so many mistakes. I probably committed a few more sins this morning; I have been battling this for the whole morning now. So it comes to the point that I want a clean start. Can I delete the GitHub fork and refork from the upstream master? What are the correct steps?
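    For what it's worth, deleting the fork is not strictly required; a sketch of resetting the fork's master to the upstream instead (the remote name "upstream" and the URL are placeholders to fill in):

        git remote add upstream https://github.com/<owner>/<repo>.git
        git fetch upstream
        git checkout master
        git reset --hard upstream/master   # discard the messed-up local history
        git push --force origin master     # overwrite the fork's master on GitHub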

  • Using a regex pattern to find revision numbers from a svn merge

    - by zyzy
        svn diff -rXX:HEAD

    will give me a format like this, if there has been a merge between those revisions:

        Merged /<branch>:rXXX,XXX-XXX

    or

        Merged /<branch>:rXXX

    I'm not very familiar with regex and am trying to put together a pattern which will match all the numbers (merged revision numbers) AFTER matching the "Merged /branch:r" part. So far I have this to match the first part: [Mm]erged.*[a-zA-Z]:r Thanks in adv. for the help :)
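    One possible pattern, in POSIX extended syntax (a sketch, untested against every svn output variant): anchor on the ":r" and capture the whole revision list in one group, then split it on commas and dashes afterwards.

        # group 1 captures e.g. "123" or "123,200-210"
        [Mm]erged +[^:]+:r([0-9]+([,-][0-9]+)*)

        # as a shell one-liner printing just the revision lists:
        svn diff -rXX:HEAD | sed -nE 's/^ *[Mm]erged +[^:]+:r([0-9]+([,-][0-9]+)*).*/\1/p'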

  • Git Svn dcommit error - restart the commit

    - by Rob Wilkerson
    Last week, I made a number of changes to my local branch before leaving town for the weekend. This morning I wanted to dcommit all of those changes to the company's Svn repository, but I get a merge conflict in one file:

        Merge conflict during commit: Your file or directory 'build.properties.sample' is
        probably out-of-date: The version resource does not correspond to the resource within
        the transaction. Either the requested version resource is out of date (needs to be
        updated), or the requested version resource is newer than the transaction root
        (restart the commit).

    I'm not sure exactly why I'm getting this, but before attempting to dcommit, I did a git svn rebase. That "overwrote" my commits. To recover from that, I did a git reset --hard HEAD@{1}. Now my working copy seems to be where I expect it to be, but I have no idea how to get past the merge conflict; there's not actually any conflict to resolve that I can find. Any thoughts would be appreciated.

    EDIT: Just wanted to specify that I am working locally. I have a local branch for the trunk that references svn/trunk (the remote branch). All of my work was done on the local trunk:

        $ git branch
          maint-1.0.x
          master
        * trunk

        $ git branch -r
          svn/maintenance/my-project-1.0.0
          svn/trunk

    Similarly, git log currently shows 10 commits on my local trunk since the last commit with a Svn ID. Hopefully that answers a few questions. Thanks again.
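    A commonly suggested sequence for getting out of this state, assuming the reset above really did restore the local commits (a sketch, not a guaranteed fix):

        git svn fetch      # refresh the svn remote-tracking refs
        git svn rebase     # replay the local trunk commits onto the latest svn revision
        git svn dcommit    # then retry pushing them to svn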

  • Tracking SVN changes through multiple merges

    - by Paul D.
    At work we use a branching strategy where all changes start off in a development branch, then subsequently make their way through one or more integration branches, and finally end up in a release branch. Occasionally (more often than I'd like) I find myself needing to figure out where a particular change originated (which development branch). In this case I have to spend a considerable amount of time playing detective to trace a change backwards through 2-3 merges. Am I missing an easy way to do this?
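    If the repositories run Subversion 1.5 or later and the merges were recorded with merge tracking, the stored mergeinfo can do most of the detective work; a sketch, with placeholder branch URLs:

        # show revision REV together with the revisions it merged in, nested:
        svn log -g -c REV ^/branches/release-1.0

        # list which revisions of a source branch have reached a target branch:
        svn mergeinfo --show-revs merged ^/branches/dev-foo ^/branches/release-1.0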

  • Do I need to perform a commit after a rebase?

    - by Benjol
    I've just rebased a feature branch onto another feature branch (in preparation for rebasing everything to the head of my master), and it involved quite a few tricky merge resolutions. Is the rebase automatically saved as a commit somewhere? Just where do those modifications live? I can't see anything in gitk, or git log --oneline. (Same question for when I merge back my branch after rebasing.)
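    For reference, a rebase does not create a separate "resolution commit": the conflict resolutions are baked into the replayed commits themselves, which is why nothing extra shows up in gitk or git log --oneline. The pre-rebase state is still reachable through the reflog; a sketch, with a placeholder branch name:

        git reflog show my-feature          # one entry per update of the branch ref
        git log --oneline my-feature@{1}    # the branch exactly as it was before the rebase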

  • Perforce how-to: syncing/merging files between branches

    - by CodeToGlory
        (A) --------- (B) ------------- (C)
         |             |                 |
        Trunk     ReleaseBranch    DeveloperBranch

    Developers work in the C branch and check in all the files. The modified files are then labeled in the C branch. The binaries that get deployed are built from the B branch and labeled. Currently all this is manual. In Perforce, is there a simple way to accomplish this, like merging branches based on labels etc.?
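    Perforce accepts a label as a revision specifier on an integrate, so the C-to-B promotion can be scripted; a sketch, with placeholder depot paths and label name:

        p4 integrate //depot/dev/...@dev-label //depot/release/...
        p4 resolve -am                    # auto-merge, leaving true conflicts for review
        p4 submit -d "Promote dev-label from dev to release"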

  • Copying subversion commit messages

    - by Falcor
    I know this isn't the BEST practice, but every once in a while when I'm merging a huge batch of changes into the trunk (and I know my branch is current), I will simply delete the contents of the trunk and then copy the contents of my branch up, so that I don't have to deal with resolving conflicts for an hour. The problem is that I seem to lose the entire history of commit messages for each file. My branch still has the correct history of commit messages... how can I merge them up?
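    One hedged suggestion: if the replacement is done as a repository-side copy instead of copying files inside a working copy, Subversion records the ancestry, so per-file history follows the copy. A sketch, with a placeholder branch path:

        svn rm ^/trunk -m "Replace trunk with branch contents"
        svn cp ^/branches/mybranch ^/trunk -m "Branch becomes the new trunk"
        # svn log on the new trunk now follows history back through the branch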

  • Does git have functionality like CVS's rtag?

    - by user1663987
    In CVS, we could programmatically create a new branch of existing source using the "rtag" command, which did not require a copy of the repository. Does git support functionality of this kind, making a branch of existing files in a remote git repository without having a local copy of it? Or does the distributed nature of git preclude this? (I'm trying to save the 20+ minutes it would take to make a freestanding copy of the repository, just to run a 'git branch' command.)
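    Stock git has no server-side equivalent of rtag, but from any existing clone a new remote branch is just a ref update on the server, and nothing large is transferred because the remote already has the objects. A sketch, with a placeholder branch name:

        git fetch origin
        git push origin origin/master:refs/heads/release-2.0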

  • How to track CVS Check-Ins

    - by Geek
    We recently created a branch off the main branch of our code for the beta. Now I want to see which files have changed on the branch in the last week. How do I get that information out of CVS? What is the command for this?
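    One possible incantation, assuming the branch was created with the tag BETA_BRANCH (a sketch; the date syntax may need adjusting for your CVS version):

        cvs log -d ">1 week ago" -rBETA_BRANCH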

  • Git development/production workflow – how to set up repo?

    - by Blixt
    I'm working on a relatively small, but fast-changing project (a web application) with a few other developers. We're using Git for source control. We started out creating a stable branch which is what is deployed to the live production web server. The master branch is what is deployed to a secondary "unstable" server for testing purposes. Whenever we felt that the master branch was ready to go live, we merged it into stable. However, we came to a point where we wanted one of the later master commits, but not some of the commits before it, so we used cherry-pick to pull that change into stable. This creates a new commit with the same change as the one in master, and it feels as if we're losing the nice history that Git otherwise provides. Are there better ways of handling this type of unstable/stable deployment model? One solution I thought of was using feature branches, and only ever merging a feature branch into master once we want it to go live. Then we'll tag every deployment instead of having a stable branch.
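    The tag-every-deployment variant mentioned at the end stays fairly lightweight in practice; a sketch, with placeholder branch and tag names:

        git checkout master
        git merge --no-ff feature/checkout-flow    # merge only when it should go live
        git tag -a deploy-2010-06-01 -m "Deployed to production"
        git push origin master --tags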

  • Convert remote object result to ArrayCollection in Flex

    - by user364199
    Hi guys, I'm using Zend_AMF and Flex. My problem is I have to populate my AdvancedDataGrid using an ArrayCollection. This ArrayCollection has children. Example:

        [Bindable]
        private var dpHierarchy:ArrayCollection = new ArrayCollection([
            {trucks: "Truck", children: [
                {trucks: "AMC841", total_trip: 1, start_time: '3:46:40 AM'},
                {trucks: "AMC841", total_trip: 1, start_time: '3:46:40 AM'}
            ]}
        ]);

    But the data source of my datagrid should come from a database. How can I convert the result from the remote object to an ArrayCollection that has the same format as in my example, or is there any other way? Here is my AdvancedDataGrid:

        <mx:AdvancedDataGrid id="datagrid" width="500" height="200"
                lockedColumnCount="1" lockedRowCount="0"
                horizontalScrollPolicy="on" includeIn="loggedIn" x="67" y="131">
            <mx:dataProvider>
                <mx:HierarchicalData id="dpHierarchytest" source="{dp}"/>
            </mx:dataProvider>
            <mx:groupedColumns>
                <mx:AdvancedDataGridColumn dataField="trucks" headerText="Trucks"/>
                <mx:AdvancedDataGridColumn dataField="total_trip" headerText="Total Trip"/>
                <mx:AdvancedDataGridColumnGroup headerText="PRECOOLING">
                    <mx:AdvancedDataGridColumnGroup headerText="Before Loading">
                        <mx:AdvancedDataGridColumn dataField="start_time" headerText="Start Time"/>
                        <mx:AdvancedDataGridColumn dataField="end_time" headerText="End Time"/>
                        <mx:AdvancedDataGridColumn dataField="precooling_time" headerText="Precooling Time"/>
                        <mx:AdvancedDataGridColumn dataField="precooling_temp" headerText="Precooling Temp"/>
                    </mx:AdvancedDataGridColumnGroup>
                    <mx:AdvancedDataGridColumnGroup headerText="Before Dispatch">
                        <mx:AdvancedDataGridColumn dataField="bd_start_time" headerText="Start Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_end_time" headerText="End Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_precooling_time" headerText="Precooling Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_precooling_temp" headerText="Precooling Temp"/>
                    </mx:AdvancedDataGridColumnGroup>
                    <mx:AdvancedDataGridColumn dataField="remarks" headerText="Remarks"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Temperature Compliance">
                    <mx:AdvancedDataGridColumn dataField="total_hit" headerText="Total Hit"/>
                    <mx:AdvancedDataGridColumn dataField="total_miss" headerText="Total Miss"/>
                    <mx:AdvancedDataGridColumn dataField="cold_chain_compliance" headerText="Cold Chain Compliance"/>
                    <mx:AdvancedDataGridColumn dataField="average_temp" headerText="Average Temp"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Productivity">
                    <mx:AdvancedDataGridColumn dataField="total_drop_points" headerText="Total Drop Points"/>
                    <mx:AdvancedDataGridColumn dataField="total_delivery_time" headerText="Total Delivery Time"/>
                    <mx:AdvancedDataGridColumn dataField="total_distance" headerText="Total Distance"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Trip Exceptions">
                    <mx:AdvancedDataGridColumn dataField="total_doc" headerText="Total DOC"/>
                    <mx:AdvancedDataGridColumn dataField="total_eng" headerText="Total ENG"/>
                    <mx:AdvancedDataGridColumn dataField="total_fenv" headerText="Total FENV"/>
                    <mx:AdvancedDataGridColumn dataField="average_speed" headerText="Average Speed"/>
                </mx:AdvancedDataGridColumnGroup>
            </mx:groupedColumns>
        </mx:AdvancedDataGrid>

    Thanks, and I really need some help.
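    A sketch of one way to build that shape in the RemoteObject result handler; the handler name is hypothetical, and it assumes the server returns a flat Array of records that each carry a trucks field like the example above:

        import mx.collections.ArrayCollection;
        import mx.rpc.events.ResultEvent;

        private function onTripsResult(event:ResultEvent):void {
            var rows:Array = event.result as Array;  // flat records from Zend_AMF
            var byTruck:Object = {};
            var roots:Array = [];
            for each (var row:Object in rows) {
                var key:String = row.trucks;
                if (byTruck[key] == null) {
                    // one parent node per truck, children filled in below
                    byTruck[key] = {trucks: key, children: []};
                    roots.push(byTruck[key]);
                }
                byTruck[key].children.push(row);
            }
            dpHierarchy = new ArrayCollection(roots);
        }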

  • Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host

    - by xnoor
    I have an update server that sends client updates through TCP port 12000. Sending a single file succeeds only the first time; after that I get an error message on the server: "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host". If I restart the update service on the server, it works again, but only once. I have a normal multithreaded Windows service.

    SERVER CODE

        namespace WSTSAU
        {
            public partial class ApplicationUpdater : ServiceBase
            {
                private Logger logger = LogManager.GetCurrentClassLogger();
                private int _listeningPort;
                private int _ApplicationReceivingPort;
                private string _setupFilename;
                private string _startupPath;

                public ApplicationUpdater()
                {
                    InitializeComponent();
                }

                protected override void OnStart(string[] args)
                {
                    init();
                    logger.Info("after init");
                    Thread ListnerThread = new Thread(new ThreadStart(StartListener));
                    ListnerThread.IsBackground = true;
                    ListnerThread.Start();
                    logger.Info("after thread start");
                }

                private void init()
                {
                    _listeningPort = Convert.ToInt16(ConfigurationSettings.AppSettings["ListeningPort"]);
                    _setupFilename = ConfigurationSettings.AppSettings["SetupFilename"];
                    _startupPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6);
                }

                private void StartListener()
                {
                    try
                    {
                        logger.Info("Listening Started");
                        ThreadPool.SetMinThreads(50, 50);
                        TcpListener listener = new TcpListener(_listeningPort);
                        listener.Start();
                        while (true)
                        {
                            TcpClient c = listener.AcceptTcpClient();
                            ThreadPool.QueueUserWorkItem(ProcessReceivedMessage, c);
                        }
                    }
                    catch (Exception ex)
                    {
                        logger.Error(ex.Message);
                    }
                }

                void ProcessReceivedMessage(object c)
                {
                    try
                    {
                        TcpClient tcpClient = c as TcpClient;
                        NetworkStream Networkstream = tcpClient.GetStream();
                        byte[] _data = new byte[1024];
                        int _bytesRead = 0;
                        _bytesRead = Networkstream.Read(_data, 0, _data.Length);
                        MessageContainer messageContainer = new MessageContainer();
                        messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer) as MessageContainer;
                        switch (messageContainer.messageType)
                        {
                            case MessageType.ApplicationUpdateMessage:
                                ApplicationUpdateMessage appUpdateMessage = new ApplicationUpdateMessage();
                                appUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(messageContainer.messageContnet, appUpdateMessage) as ApplicationUpdateMessage;
                                Func<ApplicationUpdateMessage, bool> HandleUpdateRequestMethod = HandleUpdateRequest;
                                IAsyncResult cookie = HandleUpdateRequestMethod.BeginInvoke(appUpdateMessage, null, null);
                                bool WorkerThread = HandleUpdateRequestMethod.EndInvoke(cookie);
                                break;
                        }
                    }
                    catch (Exception ex)
                    {
                        logger.Error(ex.Message);
                    }
                }

                private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage)
                {
                    try
                    {
                        TcpClient tcpClient = new TcpClient();
                        NetworkStream networkStream;
                        FileStream fileStream = null;
                        tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber);
                        networkStream = tcpClient.GetStream();
                        fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read);
                        FileInfo fi = new FileInfo(_startupPath + "\\" + _setupFilename);
                        BinaryReader binFile = new BinaryReader(fileStream);
                        FileUpdateMessage fileUpdateMessage = new FileUpdateMessage();
                        fileUpdateMessage.fileName = fi.Name;
                        fileUpdateMessage.fileSize = fi.Length;
                        MessageContainer messageContainer = new MessageContainer();
                        messageContainer.messageType = MessageType.FileProperties;
                        messageContainer.messageContnet = SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage);
                        byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer);
                        networkStream.Write(messageByte, 0, messageByte.Length);
                        int bytesSize = 0;
                        byte[] downBuffer = new byte[2048];
                        while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                        {
                            networkStream.Write(downBuffer, 0, bytesSize);
                        }
                        fileStream.Close();
                        tcpClient.Close();
                        networkStream.Close();
                        return true;
                    }
                    catch (Exception ex)
                    {
                        logger.Info(ex.Message);
                        return false;
                    }
                    finally
                    {
                    }
                }

                protected override void OnStop()
                {
                }
            }
        }

    I have to note that my Windows service (server) is multithreaded. I hope someone can help with this.
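    One thing worth checking in this design: ProcessReceivedMessage does a single Read of at most 1024 bytes and assumes it received the whole serialized MessageContainer, but TCP is a stream with no message boundaries. A length prefix plus a read loop is the usual remedy; a hypothetical helper, not a drop-in fix (requires System.IO and System.Net.Sockets):

        // Read exactly 'count' bytes from the stream or throw.
        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int n = stream.Read(buffer, offset, count - offset);
                if (n == 0) throw new IOException("peer closed the connection");
                offset += n;
            }
            return buffer;
        }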

  • USB Hub and Ubuntu

    - by aserwin
    I have a powered 7 port hub connected to my Ubuntu box and it does nothing. The devices (zip drive and web cam) work direct, but aren't recognized through the hub. This worked fine in Windows 7. I can't prove it is the OS because this is a new motherboard and processor. Any advice?

    EDIT: Output from lsusb -v:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:12.2 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0503 highspeed power enable connect
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
            Port 5: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:13.2 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
            Port 5: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:16.2 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:12.0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
            Port 5: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:13.0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
            Port 5: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:14.5 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:16.0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0303 lowspeed power enable connect
            Port 2: 0000.0100 power
            Port 3: 0000.0100 power
            Port 4: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12
          Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff
          Hub Port Status:
            Port 1: 0000.0100 power
            Port 2: 0000.0100 power
          Device Status: 0x0001 Self Powered

        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 31 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0
          Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 bMaxBurst 0
          Hub Descriptor: bLength 12 bDescriptorType 42 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere bHubDecLat 0.0 micro seconds wHubDelay 0 nano seconds DeviceRemovable 0x00
          Hub Port Status:
            Port 1: 0000.02a0 5Gbps power Rx.Detect
            Port 2: 0000.02a0 5Gbps power Rx.Detect
          Binary Object Store Descriptor: bLength 5 bDescriptorType 15 wTotalLength 15 bNumDeviceCaps 1
          SuperSpeed USB Device Capability: bLength 10 bDescriptorType 16 bDevCapabilityType 3 bmAttributes 0x00 Latency Tolerance Messages (LTM) Supported wSpeedsSupported 0x0008 Device can operate at SuperSpeed (5Gbps) bFunctionalitySupport 3 Lowest fully-functional device speed is SuperSpeed (5Gbps) bU1DevExitLat 3 micro seconds bU2DevExitLat 2047 micro seconds
          Device Status: 0x0001 Self Powered

        Bus 001 Device 002: ID 04a9:1709 Canon, Inc. PIXMA MP150 Scanner
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x04a9 Canon, Inc. idProduct 0x1709 PIXMA MP150 Scanner bcdDevice 1.08 iManufacturer 1 Canon iProduct 2 MP150 iSerial 3 20BC24 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 62 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 3 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x07 EP 7 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x88 EP 8 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x89 EP 9 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 11
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 7 Printer bInterfaceSubClass 1 Printer bInterfaceProtocol 2 Bidirectional iInterface 0
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0
          Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1
          Device Status: 0x0001 Self Powered

        Bus 007 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser
          Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. idProduct 0xc517 LX710 Cordless Desktop Laser bcdDevice 38.10 iManufacturer 1 Logitech iProduct 2 USB Receiver iSerial 0 bNumConfigurations 1
          Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 98mA
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0
            HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 59
            Report Descriptors: ** UNAVAILABLE **
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10
          Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0
            HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 177
            Report Descriptors: ** UNAVAILABLE **
            Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10
          Device Status: 0x0000 (Bus Powered)

    This is with the powered hub plugged in.
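    Two standard checks that might narrow this down (a sketch):

        dmesg | tail -n 20    # kernel messages right after plugging a device into the hub
        lsusb -t              # tree view: does the hub enumerate, and do its children appear?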

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime.

    Installing the driver

    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections

    As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.).

    The context for remote servers

    Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following:

    - The application object itself.
    - All entities present in this application (sources, sinks, subjects and processes).

    The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later.

    Remoting is not supported

    As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported.
        Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from
        the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries

    You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source we have to fetch it and then we can bind something to

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters appropriate and run the process.

    Proxy Types

    Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type.

        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is if the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us what’s the name of the entity used to build this property in the context.

    Runtime information

    So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you can have access to process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.).

    Regards,
    The StreamInsight Team

  • Problem: libc.so.6 version GLIBC_2.14 not found

    - by alessio
    I have installed Nagios Core 4 on Ubuntu Server 12.04 LTS. Everything is working fine, but... I have a problem with remote commands to a remote Linux (Ubuntu Server 12.04) PC! When I try to check a service, for example check_swap, check_disk etc., I get an error every time:

        Remote command execution failed: /home/nagios/plugins/check_disk:
        /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found
        (required by /home/nagios/plugins/check_disk)

    The remote PC is not my PC and I don't want to make a disaster! :) So... how can I fix this problem? Any help would be appreciated!!! ;) Thanks in advance! :)
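    That message usually means the plugin binary was compiled against a newer glibc than the remote host actually has; rebuilding the Nagios plugins on, or for, the remote host is the usual fix. A quick way to confirm the mismatch (a sketch):

        # on the remote host: which glibc is installed?
        ldd --version

        # which glibc versions does the shipped plugin demand?
        objdump -T /home/nagios/plugins/check_disk | grep GLIBC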

  • VDI ? Synergy

    - by katsumii
    The original Japanese text of this post is garbled (mojibake) beyond recovery; only fragments are readable. Those fragments describe a desktop virtualization setup using Oracle Sun Ray 3 thin clients against Windows Server Remote Desktop Services via Sun Ray Software, used together with Synergy for sharing one keyboard and mouse across machines, and they cite a Synergy issue encountered over Remote Desktop/RDP: Bug #3002 - Mouse Pointer Invisible on Client PC - Synergy ("Mouse pointer does not move.").

  • Advanced Continuous Delivery to Azure from TFS, Part 1: Good Enough Is Not Great

    - by jasont
    The folks over on the TFS / Visual Studio team have been working hard at releasing a steady stream of new features for their new hosted Team Foundation Service in the cloud. One of the most significant features released was simple continuous delivery of your solution into your Azure deployments. The original announcement from Brian Harry can be found here.

    Team Foundation Service is a great platform for .Net developers who are used to working with TFS on-premises. I’ve been using it since it became available at the //BUILD conference in 2011, and when I recently came to work at Stackify, it was one of the first changes I made. Managing work items is much easier than the tool we were using previously, although there are some limitations (more on that in another blog post). However, when continuous deployment was made available, it blew my mind. It was the killer feature I didn’t know I needed. Not to say that I wasn’t previously an advocate for continuous delivery; just that it was always a pain to set up and configure. Having it hosted - and a one-click setup – well, that’s just the best thing since sliced bread. It made perfect sense: my source code is in the cloud, and my deployment is in the cloud. Great! I can queue up a build from my iPad or phone and just let it go! I quickly tore through the quick setup and saw it all work… sort of.

    This will be the first in a three part series on how to take the building block of Team Foundation Service continuous delivery and build a CD model that will actually work for any team deploying something more advanced than a “Hello World” example.

    Part 1: Good Enough Is Not Great
    Part 2: A Model That Works: Branching and Multiple Deployment Environments
    Part 3: Other Considerations: SQL, Custom Tasks, Etc

    Good Enough Is Not Great

    There. I’ve said it. I certainly hope no one on the TFS team is offended, but it’s the truth. Let’s take a look under the hood and understand how it works, and then why it’s not enough to handle real world CD as-is.

    How it works. (note that I’ve skipped a couple of steps; I already have my accounts set up and something deployed to Azure)

    The first step is to establish some oAuth magic between your Azure management portal and your TFS Instance. You do this via the management portal. Once it’s done, you have a new build process template in your TFS instance. (Image lifted from the documentation)

    From here, you’ll get the usual prompts for security, allowing access, etc. But you’ll also get to pick which Solution in your source control to build. Here’s what the bulk of the build definition looks like. All I’ve had to do is add in the solution to build (notice that mine is from a specific branch – Release – more on that later) and I’ve changed the configuration. I trigger the build, and voila! I have an Azure deployment a few minutes later. The beauty of this is that it’s all in the cloud and I’m not waiting for my machine to compile and upload the package. (I also had to enable the build definition first – by default it is created in disabled state, probably a good thing since it will trigger on every.single.checkin by default.)

    I get to see a history of deployments from the Azure portal, and can link into TFS to see the associated changesets and work items. You’ll notice also that this build definition also automatically put my code in the Staging slot of my Azure deployment – more on this soon. For now, I can VIP swap and be in production. (P.S. I hate VIP swap and “production” and “staging” in Azure. More on that later too.)

    That’s it. That’s the default out-of-box experience. Easy, right? But it’s full of room for improvement, so let’s get into that….

    The Problems

    Nothing is perfect (except my code – it’s always perfect), and neither is Continuous Deployment without a bit of work to help it fit your dev team’s process. So what are the issues?

    Issue 1: Staging vs QA vs Prod vs whatever other environments your team may have. This, for me, is the big hairy one. Remember how this automatically deployed to staging rather than prod for us? There are a couple of issues with this model:

    - If I want to deliver to prod, it requires intervention on my part after deployment (via a VIP swap).
    - If I truly want to promote between environments (i.e. Nightly Build –> Stable QA –> Production) I likely have configuration changes between each environment such as database connection strings and this process (and the VIP swap) doesn’t account for this. Yet.

    Issue 2: Branching and delivering on every check-in. As I mentioned above, I have set this up to target a specific branch – Release – of my code. For the purposes of this example, I have adopted the “basic” branching strategy as defined by the ALM Rangers. This basically establishes a “Main” trunk where you branch off Dev and Release branches. Granted, the Release branch is usually the only thing you will deploy to production, but you certainly don’t want to roll to production automatically when you merge to the Release branch and check-in (unless you like the thrill of it, and in that case, I like your style, cowboy….). Rather, you have nightly build and QA environments, or if you’ve adopted the feature-branch model you have environments for those. Those are the environments you want to continuously deploy to. But that takes us back to Issue 1: we currently have a 1:1 solution to Azure deployment target.

    Issue 3: SQL and other custom tasks. Let’s be honest and address the elephant in the room: I need to get some sleep because I see an elephant in the room. But seriously, I can’t think of an application I have touched in the last 10 years that doesn’t need to consider SQL changes when deploying code and upgrading an environment. Microsoft seems perfectly content to ignore this elephant for now: yes, they’ve added Data Tier Applications. But let’s be honest with ourselves again: no one really uses it, and it’s not suitable for anything more complex than a Hello World sample project database. Why? Because it doesn’t fit well into a great source control story. Developers make stored procedure and table changes all day long while coding complex applications, and if someone forgets to go update the DACPAC before the automated deployment, you have a broken build until it’s completed. Developers – not just DBAs – also like to work with SQL in SQL tools, not in Visual Studio. I’m really picking on SQL because that’s generally the biggest concern that I hear. But we need to account for any custom tasks as well in the build process.

    The Solutions… ?

    We’ve taken a look at how this all works, and addressed the shortcomings. In my next post (which I promise will be very, very soon), I will detail how I’ve overcome these shortcomings and used this foundation to create a mature, flexible model for deploying my app – any version, any time, to any environment.

  • Need help understanding a recursion example in Python

    - by Ali Mustafa
    Python is my first programming language, and I'm learning it from "How to Think Like a Computer Scientist". In Chapter 5 the author gives the following example on recursion:

        def factorial(n):
            if n == 0:
                return 1
            else:
                recurse = factorial(n-1)
                result = n * recurse
                return result

    I understand that if n = 3, then the function will execute the second branch. But what I don't understand is what happens when the function enters the second branch. Can someone explain it to me?
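    To make the second branch concrete, here is the call chain for n = 3; each call waits for the inner call to return before doing its own multiplication:

        factorial(3)              # recurse = factorial(2) -> 2, result = 3 * 2 = 6
            factorial(2)          # recurse = factorial(1) -> 1, result = 2 * 1 = 2
                factorial(1)      # recurse = factorial(0) -> 1, result = 1 * 1 = 1
                    factorial(0)  # base case: returns 1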

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction into the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present of that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); Will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, What gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource)").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
    Proxy Types

    Here's an interesting scenario – what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type:

        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);

        var filter = from i in O1
                     where i > threshold
                     select i;

        filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except that in this case we can instantiate values and use them to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it can't, a new type is generated that approximates the type that exists on the server.

    Control metadata

    In addition to composing processes on top of the existing entities, we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. This is where having their real names in parentheses comes in handy. There is also another way to find out what's behind a property – dump its expression. The first line in the output tells us the name of the entity that was used to build this property in the context.

    Runtime information

    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process appear in the list of entities. You can then drag the process to the query window and access the process object through the Processes collection of the application. From there you can manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team

    Read the article

  • Git workflow for small teams

    - by janos
    I'm working on a git workflow to implement in a small team. The core ideas in the workflow:

    - there is a shared project master that all team members can write to
    - all development is done exclusively on feature branches
    - feature branches are code reviewed by a team member other than the branch author
    - the feature branch is eventually merged into the shared master and the cycle starts again

    The article explains the steps in this cycle in detail: https://github.com/janosgyerik/git-workflows-book/blob/small-team-workflow/chapter05.md

    Does this make sense or am I missing something?
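    For concreteness, here is a rough sketch of one iteration of the cycle; the branch name (feature-x) and remote name (origin) are placeholders, not taken from the question:

        # start a feature branch off the shared master
        git fetch origin
        git checkout -b feature-x origin/master

        # work, commit, and push the branch so a teammate can review it
        git commit -am "Implement feature X"
        git push origin feature-x

        # after the review passes, merge into the shared master
        git checkout master
        git pull origin master
        git merge --no-ff feature-x
        git push origin master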

    Read the article

  • Where are Credentials stored for Network Drives on WinXP?

    - by Tom Tresansky
    I have a drive mapped to a folder on a remote machine that I connect to using the Cisco VPN client. I had stored the username/password for the Windows account on that machine locally, using Windows' "remember my password" feature, so I wouldn't have to enter it every time (the user/password login dialog used to appear each time I attempted to open the remote folder, and I would have to look up and enter my credentials). The password to that remote Windows account has now changed. I am no longer prompted to enter a user name / password; instead, upon trying to open the remote folder, I receive the message: unknown user name or bad password. How do I view and change these stored credentials?
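    A hedged pointer, not confirmed in the question itself: on Windows XP, saved network credentials are typically managed through the "Stored User Names and Passwords" dialog, which can be opened directly from Start > Run:

        rem Opens the "Stored User Names and Passwords" dialog on Windows XP
        rundll32.exe keymgr.dll,KRShowKeyMgr

    From that dialog, a stale entry for the remote machine can usually be edited or removed.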

    Read the article

  • Trouble Connecting to Virtual Machine after IP address Change

    - by David
    I have a VMware image running a copy of Fedora 11, which is hosted on a remote server. The remote server recently had its IP address changed, and I'm now unable to connect to my virtual machine. The server admin assures me that my virtual machine is running and has been assigned the new IP address. I have checked the firewalls and had the remote admin restart the VM instance; neither of these fixed the problem. How do I troubleshoot a remote server that I am unable to SSH to? I'm actually even unable to ping the remote IP (the connection times out).
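    A hedged first step (the address below is a documentation placeholder, not the actual server): confirm whether the host answers at all at the new IP before suspecting SSH itself, since ICMP may simply be filtered along the path:

        # basic reachability and routing
        ping -c 4 203.0.113.10
        traceroute 203.0.113.10

        # probe the SSH port directly, even if ping fails
        nmap -Pn -p 22 203.0.113.10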

    Read the article

  • rsyslog appears to act on old configuration

    - by Jeff Lee
    I'm using a template to dynamically generate rsyslog filenames. I've made some changes to my original format, but rsyslog still appears to be using both the new template and the old one after restarting. My filename template went from this:

        $template RemoteDailyLog,"/var/log/remote/%hostname%/%$year%/%$month%/%$day%.log"

    to this:

        $template RemoteDailyLog,"/var/log/remote/%hostname%/%fromhost-ip%/%$year%/%$month%/%$day%.log"

    I stopped rsyslogd using service rsyslog stop, deleted all of my log files using rm -rf /var/log/remote/*, and then restarted rsyslogd with service rsyslog start. The problem is that rsyslog still seems to be building folder structures of the form "/var/log/remote/%hostname%/%$year%/%$month%/%$day%.log" (i.e., without the remote IP), which no longer appears anywhere in my configuration. Is it possible that old log or config data have been cached somewhere and are being preserved through the server restart? This is creeping me out a little.
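    One hedged sanity check (assuming the stock layout with an include directory, which isn't confirmed in the question): rsyslog also reads every fragment under /etc/rsyslog.d/, so an old copy of the template may survive there even after the main file is edited:

        # look for any leftover definition or use of the old template
        grep -rn "RemoteDailyLog" /etc/rsyslog.conf /etc/rsyslog.d/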

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >