Search Results

Search found 464 results on 19 pages for 'segments'.

  • Alignment of ext3 partition on LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM that resides on a RAID6 volume group, and fdisk is complaining about the partition not residing on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM? This partition will be formatted ext3. Would it be better to just format the LVM directly instead of creating a new partition?

      Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
      Disk identifier: 0x4e428f49

      Device Boot          Start   End      Blocks        Id  System
      /dev/dedvol/backup1  63      267349   2146982827+   83  Linux
      Partition 1 does not start on physical sector boundary.

      lvdisplay /dev/dedvol/backup
      --- Logical volume ---
      LV Name                /dev/dedvol/backup
      VG Name                dedvol
      LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                2.00 TiB
      Current LE             524288
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     32768
      Block device           253:1

      vgdisplay dedvol
      --- Volume group ---
      VG Name               dedvol
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  3
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                2
      Open LV               1
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               14.55 TiB
      PE Size               4.00 MiB
      Total PE              3815448
      Alloc PE / Size       3670016 / 14.00 TiB
      Free PE / Size        145432 / 568.09 GiB
      VG UUID               8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
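    A minimal sketch of the alignment arithmetic (the 512-byte sector size and 8388608-byte optimal I/O size are taken from the fdisk output above; treating the optimal I/O size as the alignment target is an assumption, not something fdisk mandates):

      # Python: round fdisk's legacy start sector up to an aligned boundary
      logical_sector = 512        # bytes, from "Sector size (logical/physical)"
      optimal_io = 8388608        # bytes, from "I/O size (minimum/optimal)"

      stripe_sectors = optimal_io // logical_sector   # 16384 sectors = 8 MiB
      default_start = 63                              # fdisk's old CHS default

      # ceiling division to the next multiple of the stripe size
      aligned_start = -(-default_start // stripe_sectors) * stripe_sectors
      print(aligned_start)        # 16384, i.e. start the partition 8 MiB in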

    Read the article

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN - a Cisco ASA 5505 in each office acts as firewall and VPN endpoint. Beyond that we keep things as simple as possible in the offices to minimise the support burden. We don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch. One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail - I'm assuming timeouts due to the offices' internet connections (they are all in developing-world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution to this is to have some sort of local DNS service that can serve local requests - so I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows Servers in each office is both expensive and creates a support burden. This got me thinking about pfSense/m0n0wall on embedded devices - those can act as a DNS server, and could be configured at HQ and sent out as just something that needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are some alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with this problem, either using some kind of embedded device, or found some other solution? Any gotchas or reasons to avoid what I have suggested?
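    For reference, the zone-transfer arrangement described here is what BIND calls a slave zone. A minimal sketch, assuming a hypothetical internal zone corp.example.com and an HQ DNS server at 10.0.0.1 (both made up; the Windows DNS servers would also have to allow transfers to the remote device):

      // named.conf fragment on the office device
      zone "corp.example.com" {
          type slave;
          masters { 10.0.0.1; };
          file "slaves/corp.example.com.db";  // local copy survives WAN outages
      };

    A forwarding cache (e.g. dnsmasq, which pfSense and m0n0wall use) avoids zone transfers entirely, but still depends on the WAN link for uncached names.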

    Read the article

  • Almost All XenServer Logical Volumes Disappeared - Recovery?

    - by Alex
    We had a hard disc crash of one of two hard discs in a software RAID with an LVM on top. The server is running Citrix XenServer. On the hard disk which is still intact, the volume group gets detected fine, but only one LV is left. (Some hashes replaced by "x".)

      # lvdisplay
      --- Logical volume ---
      LV Name                /dev/VG_XenStorage-x-x-x-x-408b91acdcae/MGT
      VG Name                VG_XenStorage-x-x-x-x-408b91acdcae
      LV UUID                x-x-x-x-x-x-vQmZ6C
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                4.00 MiB
      Current LE             1
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     256
      Block device           253:0

      root@rescue ~ # vgdisplay
      --- Volume group ---
      VG Name               VG_XenStorage-x-x-x-x-408b91acdcae
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               0
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               698.62 GiB
      PE Size               4.00 MiB
      Total PE              178848
      Alloc PE / Size       1 / 4.00 MiB
      Free PE / Size        178847 / 698.62 GiB
      VG UUID               x-x-x-x-x-x-53w0kL

    I could understand if a full physical volume were lost - but why only the logical volumes? Is there any explanation for this? Is there any way to recover the logical volumes?

    EDIT: We are in a rescue system here. The problem is that the whole server does not boot (GRUB error 22). What we are trying to do is access the root filesystem. But everything was in the LVM. We have only this:

      (parted) print
      Model: ATA SAMSUNG HD753LJ (scsi)
      Disk /dev/sdb: 750GB
      Sector size (logical/physical): 512B/512B
      Partition Table: msdos
      Number  Start   End    Size   Type     File system  Flags
      1       32.3kB  750GB  750GB  primary               boot, lvm

    And this 750GB LVM volume is exactly what we see on top.

    EDIT 2: Output of vgcfgrestore, but from the rescue system, as there is no root to chroot to.

      # vgcfgrestore --list VG_XenStorage-x-b4b0-x-x-408b91acdcae
      File:         /etc/lvm/archive/VG_XenStorage-x-x-x-x-408b91acdcae_00000.vg
      VG name:      VG_XenStorage-x-x-x-x-408b91acdcae
      Description:  Created *before* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
      Backup Time:  Fri Jun 28 23:53:20 2013

      File:         /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae
      VG name:      VG_XenStorage-x-x-x-x-408b91acdcae
      Description:  Created *after* executing '/sbin/vgscan --ignorelockingfailure --mknodes'
      Backup Time:  Fri Jun 28 23:53:20 2013
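    A sketch of the standard recovery path from this point, using only stock LVM2 commands (whether the backup file still describes the missing LVs is the open question; --test makes the first run non-destructive):

      # dry run: check what the metadata backup would restore
      vgcfgrestore --test -f /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae \
          VG_XenStorage-x-x-x-x-408b91acdcae

      # if the dry run looks sane, restore the metadata and activate
      vgcfgrestore -f /etc/lvm/backup/VG_XenStorage-x-x-x-x-408b91acdcae \
          VG_XenStorage-x-x-x-x-408b91acdcae
      vgchange -ay VG_XenStorage-x-x-x-x-408b91acdcae
      lvscan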

    Read the article

  • Windows 7 not booting up and stuck at startup repair

    - by mikimr
    I've been having issues with Windows 7 Home Premium on a Lenovo laptop. At first, it would not start up normally at all. I started it in Safe Mode, where I disabled all non-MS services and tried again, to no avail. It then goes into Startup Repair, where it failed several times. I tried copying the original registry settings; still the same. I resorted to booting with an Ubuntu DVD, where I ran boot-repair, which is supposed to correct the Windows boot. No luck. I used the Win7 DVD to start up from there, where I had the option to install or repair. I chose repair, got into the command prompt, and ran chkdsk /i /r, where it found 3 unreadable segments, went through the 2nd step without issues, and the 3rd step completed with some errors (can't recall the exact errors). When I restarted the machine, it went straight to Startup Repair, indicating "Attempting repairs... Repairing disk errors. This might take over an hour to complete." It's been like this for nearly 15 hours. When I try to cancel or close the Startup Repair window, I get a message "The current repair operation cannot be cancelled." Should I let it run or force-shut the machine? If I force-shut it, how can I resolve this problem? Thanks.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption - which enables you to securely encrypt data on the hard drives before you send them, and not have to worry about the data being compromised even if a disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab, make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.
    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight. HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and cross-platform command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight - the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery - or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle-supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network, you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier). Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this, Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site's execution when it hits any breakpoints that you have set within your web application's project inside Visual Studio. For example, I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio. Note that we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting - the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites, read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call each.

    New support for using tag expressions to enable advanced customer segmentation

    With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:
    - Offers based on multiple preferences - e.g. send a game-day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message - e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content - e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

      var notification = new WindowsNotification(messagePayload);
      hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: To "all my group except me" - group:id && !user:id
    - Events: A touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 ...
    - Hours: Send notifications at specific times. E.g. tag devices with time zone, and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services

    With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The steps below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
    I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git - and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don't have to buy or rent my own build server - Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permission to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today's Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier, so you can enable it even without paying anything). Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services - including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" - which does not cost anything and can be used forever. Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it - make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site to Windows Azure, and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. We can now see insights into how our app is performing - without having to have written a single line of monitoring code.
    The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information, including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information - including which queries executed against your database and what their performance was
    - And a whole bunch more...

    The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven't tried New Relic out yet with Windows Azure, I recommend you do so - I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
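    The same option can also be set from code. A short sketch using the .NET Service Bus client library of this era (the queue name and connection string are placeholders, not from the original post):

      var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
      var description = new QueueDescription("orders") { EnablePartitioning = true };
      namespaceManager.CreateQueue(description);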
    Billing: New Billing Alert Service

    Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill, as well as multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

    Summary

    Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ADO.NET Data Services BeginExecuteBatch call works on development, fails on production server with "Object does not match target type"

    - by Mike Morley
    We have an ADO.NET Data Services 1.0 call that is being passed to a [WebGet] service operation as a batch through BeginExecuteBatch. Everything works perfectly on our development server - we have the project configured to use IIS instead of the Cassini web server to make it as close to our production server as we can. When we publish to the production server, all the service operations work perfectly except the batch call, which fails with "Object does not match target type". I have not been able to find any cause for this. I can even run a single non-batch style GET operation against the [WebGet] service by copying the URL used in the batch and pasting it in a browser. I have not been able to find any information to help me solve this - any guidance would be most appreciated. Thanks, Mike M.

    Error message from Fiddler:

      HTTP/1.1 500 Internal Server Error
      Content-Type: application/xml
      DataServiceVersion: 1.0;

      An error occurred while processing this request.
      Object does not match target type.
      System.Reflection.TargetException
        at System.Reflection.RuntimeMethodInfo.CheckConsistency(Object target)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
        at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
        at System.Data.Services.RequestUriProcessor.CreateFirstSegment(IDataService service, String identifier, Boolean checkRights, String queryPortion, Boolean& crossReferencingUrl)
        at System.Data.Services.RequestUriProcessor.CreateSegments(String[] segments, IDataService service)
        at System.Data.Services.RequestUriProcessor.ProcessRequestUri(Uri absoluteRequestUri, IDataService service)
        at System.Data.Services.DataService`1.BatchDataService.HandleBatchContent(Stream responseStream)

    Read the article

  • How can I line up WPF items in a horizontal WrapPanel so they line up based on an arbitrary vertical offset?

    - by Scott Whitlock
    I'm trying to create a View in WPF and having a hard time figuring out how to set it up. Here's what I'm trying to build: My ViewModel exposes an IEnumerable property called Items. Each item is an event on a timeline, and each one implements ITimelineItem. The ViewModel for each item has its own DataTemplate to display it. I want to display all the items on the timeline connected by a line. I'm thinking a WrapPanel inside of a ListView would work well for this. However, the height of each item will vary depending on the information it displays. Some items will have graphic objects right on the line (like a circle or a diamond, or whatever), and some have annotations above or below the line. So it seems complicated to me. It seems that each item on the timeline has to render its own segment of the line. That's fine. But the distance from the top of the item to the line (and from the bottom of the item to the line) could vary. If one item has the line 50 px down from the top and the next item has the line 100 px down from the top, then the first item needs 50 px of padding so that the line segments add up. I think I could solve that problem; however, we only need to add padding if these two items are on the same line in the WrapPanel! Let's say there are 5 items and only room on the screen for 3 across... the WrapPanel will put the other two on the next line. That's ok, but that means only the first 3 need to pad together, and the last 2 need to pad together. This is what's giving me a headache. Is there another approach I could look at?

    Read the article

  • Javascript/Greasemonkey: search for something then set result as a value

    - by thewinchester
    Ok, I'm a bit of a n00b when it comes to JS (I'm not the greatest programmer) so please be gentle - especially if my question's been asked already somewhere and I'm too stupid to find the right answer. Self-deprecation out of the way, let's get to the question.

    Problem: There is a site me and a large group of friends frequently use which doesn't display all the information we may like to know - in this case an airline booking site and the class of travel. While the information is buried in the code of the page, it isn't displayed anywhere to the user. Using a Greasemonkey script, I'd like to liberate this piece of information and display it in a suitable format. Here's the pseudocode of what I'm looking to do:

      Search dom for specified element
      define variables
      Find a string of text
      If found
          Set result to a variable
          Write contents to page at a specific location (before a specified div)
      If not found
          Do nothing

    I think I've achieved most of it so far, except for the key bits of:

    Searching for the string: The page needs to search for the following piece of text in the page HEAD: mileageRequest += "&CLASSES=S,S-S,S-S"; The content I need to extract and store is between the second equals (=) sign and the closing quotation mark ("). The contents of this area can be any letter between A-Z. I'm not fussed about splitting it up into an array so I could use the elements individually at this stage.

    Writing the result to a location: Taking that found piece of text and writing it to another location.

    Code so far. This is what I've come up with so far, with the missing bits highlighted:

      buttons = document.getElementById('buttons');
      // Search goes here
      var flightClasses = document.createElement("div");
      flightClasses.innerHTML = '<div id="flightClasses"> ' +
          '<h2>Travel classes</h2>' +
          'For the above segments, your flight classes are as follows:' +
          'write result here' +
          '</div>';
      main.parentNode.insertBefore(flightClasses, buttons);

    If anyone could help me, or point me in the right direction to finish this off, I'd appreciate it.
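    A sketch of the missing search step (illustrative only, not the poster's code): since the target line sits in an inline script, one option is to run a regular expression over the page's markup and capture everything between "CLASSES=" and the closing quote.

      // hypothetical fill-in for the "Search goes here" step
      var match = document.documentElement.innerHTML
          .match(/mileageRequest \+= "&CLASSES=([^"]+)"/);
      var classes = match ? match[1] : null;   // e.g. "S,S-S,S-S"

    The captured string could then replace the 'write result here' placeholder whenever it is not null.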

    Read the article

  • Rewriting URLs in CodeIgniter with url_title()?

    - by Craig Ward
    I am rewriting my website with CodeIgniter and have something I want to do, but am not sure it is possible. I have a gallery on my site powered by the Flickr API. Here is an example of the code I use to display the landscape pictures:

      <?php foreach ($landscapes->photoset->photo as $l->photoset->photo) : ?>
          <a href='gallery/focus/<?php echo $l->photoset->photo->farm ?>/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>/<?php echo $l->photoset->photo->secret ?>/<?php echo $l->photoset->photo->title ?>'>
              <img class='f_thumb'
                  src='http://farm<?php echo $l->photoset->photo->farm ?>.static.flickr.com/<?php echo $l->photoset->photo->server ?>/<?php echo $l->photoset->photo->id ?>_<?php echo $l->photoset->photo->secret ?>_s.jpg'
                  title='<?php echo $l->photoset->photo->title ?>'
                  alt='<?php echo $l->photoset->photo->title ?>' /></a>
      <?php endforeach; ?>

    As you can see, when a user clicks on a picture I pass over the Farm, Server, ID, Secret and Title elements using URI segments and build the page in the controller using:

      $data['farm'] = $this->uri->segment(3);
      $data['server'] = $this->uri->segment(4);
      $data['id'] = $this->uri->segment(5);
      $data['secret'] = $this->uri->segment(6);
      $data['title'] = $this->uri->segment(7);

    Everything works and is fine, but the URLs are a tad long, for example: http://localhost:8888/wip/index.php/gallery/focus/3/2682/4368875046/e8f97f61d9/Old Mill House in Donegal

    Is there a way to rewrite the URL so it's more like http://localhost:8888/wip/index.php/gallery/focus/Old_Mill_House_in_Donegal ? I was looking at using:

      $url_title = $this->uri->segment(7);
      $url_title = url_title($url_title, 'underscore', TRUE);

    But I don't seem to be able to get it to work. Any ideas?
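    Worth noting: url_title() only generates a slug; it cannot recover the farm/server/id/secret values from one. A minimal sketch of the usual pattern (illustrative only - the lookup model is hypothetical, standing in for whatever storage maps slugs back to Flickr photo data):

      // when building the link: make a slug from the title
      $slug = url_title($l->photoset->photo->title, 'underscore', TRUE);
      // e.g. "Old_Mill_House_in_Donegal"

      // in the controller: resolve the slug back to the photo's details
      $data = $this->photo_model->find_by_slug($this->uri->segment(3)); // hypothetical model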

    Read the article

  • Best way to manipulate and compare strings

    - by vtortola
    I'm developing a REST service, so a request could be something like this: /Data/Main/Table=Customers/. I need to get the segments one by one, and for each segment I will decide which object I'm going to use; after that I'll pass that object the rest of the query so it can decide what to do next. Basically, the REST query is a path on a tree :P This implies lots of String operations (depending on the query complexity), but StringBuilder is useful just for concatenation and removal; you cannot perform a search with IndexOf or similar. I've developed this class that fulfills my requirement, but the problem is that it is manipulating Strings, so every time I get one segment... I'll create extra Strings, because String is an immutable data type:

      public class RESTQueryParser
      {
          String _query;

          public RESTQueryParser(String query)
          {
              _query = query;
          }

          public String GetNext()
          {
              String result = String.Empty;
              Int32 startPosition = _query.StartsWith("/", StringComparison.InvariantCultureIgnoreCase) ? 1 : 0;
              Int32 i = _query.IndexOf("/", startPosition, StringComparison.InvariantCultureIgnoreCase) - 1;
              if (!String.IsNullOrEmpty(_query))
              {
                  if (i < 0)
                  {
                      result = _query.Substring(startPosition, _query.Length - 1);
                      _query = String.Empty;
                  }
                  else
                  {
                      result = _query.Substring(startPosition, i);
                      _query = _query.Remove(0, i + 1);
                  }
              }
              return result;
          }
      }

    The server should support a lot of calls, and the queries could be huge, so this is gonna be a very repetitive task. I don't really know how big the impact on memory and performance is; I've just read about it in some books. Should I implement a class that manages a Char[] instead of Strings and implement the methods that I want? Or should I be ok with this one? Regular expressions maybe?
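    As a point of comparison, a sketch of the simplest alternative (not from the original post): tokenize the whole path once with String.Split and walk the resulting array, which allocates each segment exactly once.

      string query = "/Data/Main/Table=Customers/";
      string[] segments = query.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
      // segments == { "Data", "Main", "Table=Customers" }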

    Read the article

  • BMP2AVI program in matlab

    - by ariel
    Hi, I wrote a program that used to work (swear to god) and has stopped working. This code takes a series of BMPs and converts them into an AVI file. This is the code:

      path4avi='C:/FadeOutMask/'; %dont forget the '/' in the end of the path
      pathOfFrames='C:/FadeOutMask/';

      NumberOfFiles=1;
      NumberOfFrames=10;

      %1:1:(NumberOfFiles)
      for i=0:1:(NumberOfFiles-1)
          FileName=strcat(path4avi,'FadeMaskAsael',int2str(i),'.avi') %the generated file
          aviobj = avifile(FileName,'compression','None');
          aviobj.fps=10;
          for j=0:1:(NumberOfFrames-1)
              Frame=strcat(pathOfFrames,'MaskFade',int2str(i*10+j),'.bmp') %not a good name for the directory
              [Fa,map]=imread(Frame);
              imshow(Fa,map);
              F=getframe();
              aviobj=addframe(aviobj,F)
          end
          aviobj=close(aviobj);
      end

    And this is the error I get:

      ??? Error using ==> checkDisplayRange at 22
      HIGH must be greater than LOW.
      Error in ==> imageDisplayValidateParams at 57
      common_args.DisplayRange = checkDisplayRange(common_args.DisplayRange,mfilename);
      Error in ==> imageDisplayParseInputs at 79
      common_args = imageDisplayValidateParams(common_args);
      Error in ==> imshow at 199
      [common_args,specific_args] = ...
      Error in ==> ConverterDosenWorkd at 19
      imshow(Fa,map);

    For some reason I can't put it as code segments, sorry. Thank you, Ariel

    Read the article

  • Safe to update separate regions of a BufferedImage in separate threads?

    - by finnw
    I have a collection of BufferedImage instances: one main image and some subimages created by calling getSubimage on the main image. The subimages do not overlap. I am also making modifications to the subimages, and I want to split this into multiple threads, one per subimage. From my understanding of how BufferedImage, Raster and DataBuffer work, this should be safe because:

    - Each instance of BufferedImage (and its respective WritableRaster) is accessed from only one thread.
    - The shared ColorModel is immutable.
    - The DataBuffer has no fields that can be modified (the only thing that can change is elements of the backing array.)
    - Modifying disjoint segments of an array in separate threads is safe.

    However, I cannot find anything in the documentation that says it is definitely safe to do this. Can I assume it is safe? I know that it is possible to work on copies of the child Rasters, but I would prefer to avoid this because of memory constraints. Otherwise, is it possible to make the operation thread-safe without copying regions of the parent image?
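    A minimal sketch of the setup being described (illustrative only; the per-tile operation and the 2-way split are made up):

      import java.awt.image.BufferedImage;
      import java.util.concurrent.ExecutorService;
      import java.util.concurrent.Executors;
      import java.util.concurrent.TimeUnit;

      public class DisjointTiles {
          // hypothetical per-tile work, touching only this tile's pixels
          static void process(BufferedImage tile) {
              for (int y = 0; y < tile.getHeight(); y++)
                  for (int x = 0; x < tile.getWidth(); x++)
                      tile.setRGB(x, y, ~tile.getRGB(x, y));
          }

          public static void main(String[] args) throws InterruptedException {
              BufferedImage img = new BufferedImage(800, 600, BufferedImage.TYPE_INT_RGB);
              BufferedImage left  = img.getSubimage(0, 0, 400, 600);   // shares img's pixel array
              BufferedImage right = img.getSubimage(400, 0, 400, 600);

              ExecutorService pool = Executors.newFixedThreadPool(2);
              pool.submit(() -> process(left));
              pool.submit(() -> process(right));
              pool.shutdown();
              pool.awaitTermination(1, TimeUnit.MINUTES); // wait before reading img again
          }
      }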

    Read the article

  • detecting pauses in a spoken word audio file using pymad, pcm, vad, etc

    - by james
    First I am going to broadly state what I'm trying to do and ask for advice. Then I will explain my current approach and ask for answers to my current problems.

    Problem: I have an MP3 file of a person speaking. I'd like to split it up into segments roughly corresponding to a sentence or phrase. (I'd do it manually, but we are talking hours of data.) If you have advice on how to do this programmatically or on some existing utilities, I'd love to hear it. (I'm aware of voice activity detection and I've looked into it a bit, but I didn't see any freely available utilities.)

    Current Approach: I thought the simplest thing would be to scan the MP3 at certain intervals and identify places where the average volume was below some threshold. Then I would use some existing utility to cut up the mp3 at those locations. I've been playing around with pymad and I believe that I've successfully extracted the PCM (pulse code modulation) data for each frame of the mp3. Now I am stuck because I can't really seem to wrap my head around how the PCM data translates to relative volume. I'm also aware of other complicating factors like multiple channels, big endian vs little, etc. Advice on how to map a group of PCM samples to relative volume would be key. Thanks!
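    A common way to map PCM samples to relative volume is root-mean-square energy. A minimal Python sketch (assuming 16-bit signed little-endian mono samples - an assumption about the decoded format, not something pymad guarantees):

      import struct

      def rms(frame_bytes, sample_width=2):
          """Root-mean-square amplitude of one frame of raw PCM."""
          n = len(frame_bytes) // sample_width
          if n == 0:
              return 0.0
          samples = struct.unpack("<%dh" % n, frame_bytes[:n * sample_width])
          return (sum(s * s for s in samples) / n) ** 0.5

      # a frame is a candidate "pause" when rms(frame) stays below a tuned
      # threshold for long enough, e.g. a few hundred milliseconds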

    Read the article

  • Flipping View transition becomes clunky when using UIPickerViews with a large number of elements

    - by user302246
    I have a very simple app created with the Utility Application project template in Xcode. My MainView has two UIPickerView components and two buttons. The FlipsideView has another UIPickerView. The pickers on the main view each have 4 segments, and each segment has 8 rows. The picker on the flip side has just 1 segment with 8 rows. All rows on all pickers are just text. With just this setup, pressing the button to flip the view back and forth displays a noticeable delay before the animation actually starts, and then the animation actually seems to go faster than it should, like it's trying to make up for the lost time. I removed the pickers in Interface Builder and loaded the app on the phone, and the animation now seems natural. I also tried just one picker (the flipside one) and things still seem normal. So my current theory is that the number of objects involved in the main view is the cause. The thing is, I don't think it's that many (4 x 8 x 2 = 64), but I could be completely wrong. This is pretty much my first app, so maybe I'm just doing something grossly wrong, or maybe the phone has a lot more limited processing than I thought. I am thinking of creating the picker views with pickerView:viewForRow:forComponent:reusingView: to see if this hopefully performs better, but I'm not sure if this is just a waste of time. Any suggestions? Thanks, Ruy. P.S.: Testing on a 3G phone on 3.1.2.

    Read the article

  • Pointer Implementation Details in C

    - by Will Bickford
    I would like to know architectures which violate the assumptions I've listed below. Also, I would like to know if any of the assumptions are false for all architectures (i.e. if any of them are just completely wrong).

    1. sizeof(int *) == sizeof(char *) == sizeof(void *) == sizeof(func_ptr *)
    2. The in-memory representation of all pointers for a given architecture is the same regardless of the data type pointed to.
    3. The in-memory representation of a pointer is the same as an integer of the same bit length as the architecture.
    4. Multiplication and division of pointer data types are only forbidden by the compiler. NOTE: Yes, I know this is nonsensical. What I mean is - is there hardware support to forbid this incorrect usage?
    5. All pointer values can be cast to a single integer. In other words, what architectures still make use of segments and offsets?
    6. Incrementing a pointer is equivalent to adding sizeof(the pointed data type) to the memory address stored by the pointer. If p is an int32* then p+1 is equal to the memory address 4 bytes after p.

    I'm most used to pointers being used in a contiguous, virtual memory space. For that usage, I can generally get by thinking of them as addresses on a number line. See http://stackoverflow.com/questions/1350471/pointer-comparison/1350488#1350488.
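    A quick way to probe assumption 1 on any one machine (a single run can refute the equality, but never prove it holds everywhere):

      #include <stdio.h>

      int main(void) {
          printf("int*:          %zu\n", sizeof(int *));
          printf("char*:         %zu\n", sizeof(char *));
          printf("void*:         %zu\n", sizeof(void *));
          printf("void(*)(void): %zu\n", sizeof(void (*)(void)));
          return 0;
      }

    On mainstream flat-address platforms all four print the same value; the question is precisely about the platforms where they would not.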

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on disk, 10M of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter whether the inserts were in a transaction, or issued via plain insert, multi-row insert, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT ... FROM took less than a second to execute, which isn't too surprising for a table <= 20M on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly - in Django, add ~50% overhead for WAL logging. I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments? Table layout: id serial primary, a bunch of int32, 3 foreign keys to:

    - small table, 198 rows, 16k on disk
    - large table, 1.2M rows, 59 data + 89 index MB on disk
    - large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
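    A sketch of the workaround the poster is circling (illustrative SQL; the table, column and constraint names are made up): drop the FK constraints before the hourly bulk load and re-add them afterwards, paying the validation cost once in a single pass instead of per row.

      ALTER TABLE hourly_stats DROP CONSTRAINT hourly_stats_ref_a_fkey;
      -- bulk load here (COPY / INSERT ... SELECT)
      ALTER TABLE hourly_stats
          ADD CONSTRAINT hourly_stats_ref_a_fkey
          FOREIGN KEY (ref_a_id) REFERENCES ref_a (id);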

    Read the article

  • implementing stretchable dialog borders in iphone sdk

    - by Joey
    Hi, I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better, more conventional name for this sort of thing. If there is, if someone would edit the title, that'd be great. Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wacky unproportional dimensions. I have a few ideas on how this is done, but am not sure which is better for iPhone. I have a few questions.

    1) Should I make a containing view object that basically overloads its drawRect method and draws the images where they should be at their appropriate scale when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, like in an animation.

    1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?

    2) Is it generally better to create a) a 3 x 3 image where the segments are in their appropriate 1x1 grid of the image? If so, is it simple to draw from a portion of this image onto my target view in drawRect (if the former assumption is correct that I should use drawRect)? Or b) the pieces separately in 8 different files?
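    On question 1b, the minimal pattern looks like this (a sketch, not a full border implementation; the image name is a placeholder):

      - (void)drawRect:(CGRect)rect {
          UIImage *corner = [UIImage imageNamed:@"border_corner.png"]; // hypothetical asset
          // draw the corner piece at its native size in the view's top-left
          [corner drawInRect:CGRectMake(0, 0, corner.size.width, corner.size.height)];
      }

    drawInRect: scales to whatever rectangle is given, so edge pieces can be stretched along one axis while corner pieces keep a fixed rectangle.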

    Read the article

  • Zend_Controller_Router_Route: Could not find a translator

    - by Antonio
    I am developing a multilanguage application. In the bootstrap the routes are set up:

    protected function _initRoutes()
    {
        $this->bootstrap('frontController');
        $router = $this->frontController->getRouter();

        // PAGES ROUTE
        $page = new Zend_Controller_Router_Route(
            ':language/:ident',
            array(
                'module'     => 'core',
                'controller' => 'pagine',
                'action'     => 'view'
            ),
            array(
                'ident'    => '[a-zA-Z-_0-9]{3,}',
                'language' => '[a-z]{2}'
            )
        );

        $registrazione = new Zend_Controller_Router_Route(
            ':language/@utenti/@registrati',
            array(
                'module'     => 'core',
                'controller' => 'utenti',
                'action'     => 'registrazione'
            ),
            array(
                'language' => '[a-z]{2}'
            )
        );

        $router->addRoute('page', $page);
        $router->addRoute('registrazione', $registrazione);
        .....
    }

    I cannot set the default translator for Zend_Controller_Router_Route (needed for the translated segments) because I don't know the language parameter until the request has been routed. I get the language parameter in the Multilanguage plugin during routeShutdown:

    class Activa_Controller_Plugin_Multilanguage extends Zend_Controller_Plugin_Abstract
    {
        public function routeShutdown(Zend_Controller_Request_Abstract $request)
        {
            $language = $request->getParam("language");
            $locale = new Zend_Locale($language);
            $translate = new Zend_Translate('array',
                APPLICATION_PATH.'/config/lang/'.$language.'.php', $locale);
            Zend_Registry::set('Zend_Locale', $locale);
            Zend_Registry::set('Zend_Translate', $translate);
            Zend_Controller_Router_Route::setDefaultTranslator($translate);
            // BUT NOW IT IS TOO LATE
        }

    When I type the address http://servername/it/utenti/registrati I get the exception with the message "Could not find a translator". How can I fix it? Antonio (Italy)

    Read the article

  • Detecting one point's location compared to two other points.

    - by WizardOfOdds
    Hi all, you can check my profile: this is not homework. I've got an interesting little problem to solve in a very real piece of software, and I'm looking for an easy way to solve it. I've got two fixed points on screen (they're fixed, but I don't know their positions beforehand) that are not at the same location. These two fixed points form an imaginary line. Now I've got a third point that is "on one side" of that line (it cannot be on the line). The user can grab the point (the user actually grabs an object, which I track by its center; that center is the point I'm interested in) and drag it. But it cannot "cross" the imaginary line. What is the easiest way to detect if the user is crossing the imaginary line? Example:

          a     c
         /
        /    (c cannot be dragged here)
       /
      b

    Or:

          c
    b -------------- a
       (c cannot be dragged here)

    So what is an easy way to detect whether c is staying on the correct "side" of the line (I drew segments here, but it really can be thought of as a line)? One way to detect this is to take the destination point d and see if segment (c,d) intersects with line (a,b), but isn't there an easier way? Can't I just do some 2D dot-product magic here and have basically a one- or two-liner solving my issue?
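
    For reference, the standard tool for this shape of problem is the sign of the 2D cross product (strictly a perp-dot product rather than a dot product): (bx - ax)(cy - ay) - (by - ay)(cx - ax) is positive for points on one side of the line through a and b, negative on the other, and zero on the line itself. Below is a minimal sketch in Java with made-up coordinates; note that in screen coordinates, where y grows downward, the sign convention for "left" and "right" flips.

    public final class LineSide {
        // Sign of the 2D cross product of (b - a) and (c - a):
        // > 0 on one side of the line through a and b, < 0 on the other, 0 on it.
        static double side(double ax, double ay, double bx, double by,
                           double cx, double cy) {
            return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
        }

        public static void main(String[] args) {
            double home = side(0, 0, 10, 0, 5, 3);      // c's legal starting side
            double dragged = side(0, 0, 10, 0, 5, -3);  // proposed drag position
            // The drag is legal as long as the sign never changes.
            System.out.println(home * dragged > 0 ? "same side" : "crossed the line");
        }
    }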

    Read the article

  • NSSegmentedControl -selectedSegment always returns 0

    - by SphereCat1
    I have an NSSegmentedControl with two segments set to "Select None" mode in Interface Builder. No matter what I try, I can't get -selectedSegment to return anything but 0, even though segment 0 is disabled by default and can't possibly be selected. Here's the relevant function that gets called when you click any segment on the control:

    -(IBAction)changeStep:(id)sender {
        [stepContainer setHidden:TRUE];
        [(NSView *)[[wizard stepArray] objectAtIndex:(NSInteger)[wizard step]] removeFromSuperview];
        switch ([[navigationButton cell] selectedSegment]) {
            case 0:
                [wizard setStep:(NSInteger *)[wizard step]-1];
                break;
            case 1:
                [wizard setStep:(NSInteger *)[wizard step]+1];
                break;
            default:
                break;
        }
        //[[navigationButton cell] setSelected:FALSE forSegment:[navigationButton selectedSegment]];
        if ([wizard step] > 0) {
            [wizard setStep:0];
            [navigationButton setEnabled:FALSE forSegment:0];
        }
        NSLog(@"%d", [wizard step]);
        [stepContainer addSubview:(NSView *)[[wizard stepArray] objectAtIndex:(NSInteger)[wizard step]]];
        [stepContainer setHidden:FALSE withFade:TRUE];
    }

    I've also tried using -isSelectedForSegment, but it has the same result. Any help you can provide would be awesome; I have no idea what I'm doing wrong. Thanks! SphereCat1

    Read the article

  • Java ternary operator and boxing Integer/int?

    - by Markus
    I tripped across a really strange NullPointerException the other day, caused by an unexpected type cast in the ternary operator. Given this (useless, exemplary) function:

    Integer getNumber() {
        return null;
    }

    I was expecting the following two code segments to be exactly identical after compilation:

    Integer number;
    if (condition) {
        number = getNumber();
    } else {
        number = 0;
    }

    vs.

    Integer number = (condition) ? getNumber() : 0;

    Turns out, if condition is true, the if-statement works fine, while the ternary operation in the second code segment throws a NullPointerException. It seems as though the ternary operation has decided to type-cast both choices to int before auto-boxing the result back into an Integer!? In fact, if I explicitly cast the 0 to Integer, the exception goes away. In other words:

    Integer number = (condition) ? getNumber() : 0;

    is not the same as:

    Integer number = (condition) ? getNumber() : (Integer) 0;

    So, it seems that there is a byte-code difference between the ternary operator and an equivalent if-else statement (something I didn't expect). Which raises three questions:

    1. Why is there a difference?
    2. Is this a bug in the ternary implementation, or is there a reason for the type cast?
    3. Given there is a difference, is the ternary operation more or less performant than an equivalent if-statement? (I know the difference can't be huge, but still.)
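
    For context, this is specified behavior rather than a compiler bug: when one operand of the conditional operator is an Integer and the other an int, the Java Language Specification's conditional-operator rules (JLS 15.25) apply binary numeric promotion, so the Integer operand is unboxed to int, and unboxing null throws. A small self-contained demo of both the failure and the cast workaround:

    public class TernaryBoxing {
        static Integer getNumber() {
            return null; // simulates a lookup that found nothing
        }

        public static void main(String[] args) {
            boolean condition = true;

            // Mixed int / Integer operands: the whole expression has type int,
            // so getNumber() is auto-unboxed -- NullPointerException here.
            try {
                Integer number = condition ? getNumber() : 0;
                System.out.println(number);
            } catch (NullPointerException e) {
                System.out.println("NPE: getNumber() was auto-unboxed");
            }

            // Both operands are Integer: no unboxing, null flows through safely.
            Integer safe = condition ? getNumber() : Integer.valueOf(0);
            System.out.println(safe); // prints "null"
        }
    }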

    Read the article

  • Packet fragmentation when sending data via SSLStream

    - by Ive
    When using an SSLStream to send a 'large' chunk of data (1 meg) to an (already authenticated) client, the packet fragmentation/disassembly I'm seeing is FAR greater than when using a normal NetworkStream. Using an async read on the client (i.e. BeginRead()), the ReadCallback is repeatedly called with exactly the same size chunk of data up until the final packet (the remainder of the data). With the data I'm sending (it's a zip file), the segments happen to be 16363 bytes long. Note: my receive buffer is much bigger than this, and changing its size has no effect. I understand that SSL encrypts data in chunks no bigger than 18Kb, but since SSL sits on top of TCP, I wouldn't think the number of SSL chunks would have any relevance to the TCP packet fragmentation. Essentially, the data is taking about 20 times longer to be fully read by the client than with a standard NetworkStream (both on localhost!). What am I missing?

    EDIT: I'm beginning to suspect that the receive (or send) buffer size of an SSLStream is limited. Even if I use synchronous reads (i.e. SSLStream.Read()), no more data ever becomes available, regardless of how long I wait before attempting to read. This would be the same behavior as if I were to limit the receive buffer to 16363 bytes. Setting the underlying NetworkStream's SendBufferSize (on the server) and ReceiveBufferSize (on the client) has no effect.
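
    For context, TLS encrypts data in records of at most 16 KiB of plaintext (which is presumably why the decrypted chunks top out just under that), and a stream read can only hand back data from records that have arrived and been decrypted in full, so one record's worth per read is expected behavior rather than a buffer limit. Whatever the TLS stack, the usual client-side pattern is therefore to loop until the expected byte count has arrived instead of assuming one read returns one message. The question is about .NET, so the minimal sketch below, in Java, only illustrates the pattern:

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    public final class StreamUtil {
        // Reads exactly buf.length bytes, looping because a TLS stream hands
        // back at most one decrypted record (<= 16 KiB of plaintext) per read.
        static void readFully(InputStream in, byte[] buf) throws IOException {
            int off = 0;
            while (off < buf.length) {
                int n = in.read(buf, off, buf.length - off);
                if (n < 0) {
                    throw new EOFException("stream ended after " + off + " bytes");
                }
                off += n;
            }
        }
    }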

    Read the article

  • Spring MVC: How to get the remaining path after the controller path?

    - by Willis Blackburn
    I've spent over an hour trying to find the answer to this question, which seems like it should reflect a common use case, but I can't figure it out! Basically I am writing a file-serving controller using Spring MVC. The URLs are of the format http://www.bighost.com/serve/the/path/to/the/file.jpg, in which the part after "/serve" is the path to the requested file, which may have an arbitrary number of path segments. I have a controller like this:

    @Controller
    class ServerController {
        @RequestMapping(value = "/serve/???")
        public void serve(???) {
        }
    }

    What I am trying to figure out is: what do I use in place of "???" to make this work? I have two theories about how this should work. The first theory is that I could replace the first "???" in the RequestMapping with a path variable placeholder that has some special syntax meaning "capture to the end of the path." If a regular placeholder looks like "{path}", then maybe I could use "{path:**}" or "{path:/}" or something like that. Then I could use a @PathVariable annotation to refer to the path variable in the second "???". The other theory is that I could replace the first "???" with "**" (match anything) and that Spring would give me an API to obtain the remainder of the path (the part matching the "**"). But I can't find such an API. Thanks for any help you can provide!
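
    One approach matching the second theory is sketched below: map "/serve/**" and recover the remainder from the attributes Spring MVC stores on the request during handler mapping. This is a hedged sketch rather than a verified recipe; check HandlerMapping's attribute constants against the Spring version in use.

    import javax.servlet.http.HttpServletRequest;
    import org.springframework.stereotype.Controller;
    import org.springframework.util.AntPathMatcher;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.servlet.HandlerMapping;

    @Controller
    public class ServeController {

        @RequestMapping("/serve/**")
        public void serve(HttpServletRequest request) {
            // Spring records which pattern matched as a request attribute;
            // AntPathMatcher can then peel off the literal "/serve/" prefix.
            String pattern = (String) request.getAttribute(
                    HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE);
            String path = new AntPathMatcher().extractPathWithinPattern(
                    pattern, request.getRequestURI());
            // For /serve/the/path/to/the/file.jpg, path is "the/path/to/the/file.jpg"
            // (assuming the app is deployed at the context root).
            // ... locate and stream the file ...
        }
    }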

    Read the article

  • How to find which delimiter was used during string split (VB.NET)

    - by typoknig
    Hi all, let's say I have a string that I want to split based on several characters, like ".", "!", and "?". How do I figure out which one of those characters split my string, so I can add that same character back onto the end of the split segments in question?

    Dim linePunctuation As Integer = 0
    Dim myString As String = "some text. with punctuation! in it?"

    For i = 1 To Len(myString)
        If Mid$(myString, i, 1) = "." Then linePunctuation += 1
    Next
    For i = 1 To Len(myString)
        If Mid$(myString, i, 1) = "!" Then linePunctuation += 1
    Next
    For i = 1 To Len(myString)
        If Mid$(myString, i, 1) = "?" Then linePunctuation += 1
    Next

    Dim delimiters(3) As Char
    delimiters(0) = "."
    delimiters(1) = "!"
    delimiters(2) = "?"
    Dim currentLineSplit As String() = myString.Split(delimiters)

    Dim sentenceArray(linePunctuation) As String
    Dim count As Integer = 0
    While linePunctuation > 0
        ' Here I want to add whatever delimiter was used to make the split
        ' back onto the string before it is stored in the array.
        sentenceArray(count) = currentLineSplit(count)
        count += 1
        linePunctuation -= 1
    End While
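
    In .NET, the usual trick is Regex.Split with a capturing group, since captured delimiters are returned interleaved in the result array. An alternative that avoids the bookkeeping entirely is to split on a zero-width lookbehind, so each segment keeps its own trailing punctuation. A minimal sketch of the lookbehind idea, written in Java here only to illustrate the regex:

    import java.util.Arrays;

    public class KeepDelimiters {
        public static void main(String[] args) {
            String text = "some text. with punctuation! in it?";
            // Split *after* each . ! or ? -- the lookbehind consumes nothing,
            // so every sentence keeps its own trailing delimiter.
            String[] sentences = text.split("(?<=[.!?])");
            System.out.println(Arrays.toString(sentences));
            // prints: [some text.,  with punctuation!,  in it?]
        }
    }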

    Read the article
