Search Results

Search found 14617 results on 585 pages for 'non breaking spaces'.


  • CodePlex Daily Summary for Thursday, May 29, 2014

    Popular Releases

    - QuickMon: Version 3.13: 1. Adding an Audio/sound notifier that can be used to simply draw attention to the application if a warning or error state is returned by a collector. 2. Adding a property for Notifiers so it can be set to 'Attended', 'Unattended' or 'Both' modes. 3. Adding a WCF method to the remote agent host so the version can be checked remotely. 4. Adding some 'Sample' monitor packs to the installer. Note: this release and the next release (3.14 aka Pie release) will have some breaking changes and will be incom...
    - fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop-down was different when generating the command line in some cases, as reported in https://findandreplace.codeplex.com/workitem/34. Fix for "Backslash inserted before dot in replacement text", reported here: https://findandreplace.codeplex.com/discussions/541024. Fix for finding replacing...
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes: NEW: Added support for 'GokoImage.com', 'ViperII.com', 'PixxxView.com', 'ImgRex.com', 'PixLiv.com', 'imgsee.me' and 'ImgS.it' links
    - Xsemmel - XML Editor and Viewer: 29-MAY-2014: WINDOWS XP IS NO LONGER SUPPORTED. If you need support for WinXP, download the 15-MAR-2014 release instead. FIX: some minor issues. NEW: better visualisation of validation issues. NEW: printing. CHG: disabled Jumplist. CHG: updated to .NET 4.5, WinXP NO LONGER SUPPORTED.
    - SPART (SharePoint Admin & Reporting Tool): Installation Kit V1.1: This release covers: • Site size • Counts - sites, site collections, documents • Site collection quota information • Site/web app/site collection permissions, which takes a URL as input • Last content change for sites, which displays the timestamp
    - Performance Analyzer for Microsoft Dynamics: DynamicsPerf 1.20: Improved performance in PERFHOURLYROWDATA_VW. Fixed error handling encrypted triggers. Added logic to ACTIVITYMONITORVW to handle Context_Info for Dynamics AX 2012 and above with this flag set on the AOS. Added logic to optional blocking to handle Context_Info for Dynamics AX 2012 and above with this flag set on the AOS. Added additional queries for investigating blocking. Added logic to collect baseline capture data (NOTE: the QUERY_STATS table has the entire procedure cache for that db during...
    - Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix connecting to a connection with custom authentication without a saved password. New tool: Solution Components Mover (v1.2014.5.22), which transfers solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7) allows you to import and export many-to-many relationships. Tool updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.71.1: More performance improvements. It's faster and consumes less memory.
    - Dynamics CRM Rich UX: RichUX Managed Solution File v0.4: Added a format type attribute so icons, CSS and colors may be defined by the retrieved entity record. Also added samples in the documentation on setting up FetchXML and tabs. Only for demo/experimenting. Do not use in production without extensive testing. Please help make this package better by reporting all issues.
    - Fluentx: Fluentx v1.4.0: Added an object-to-object mapper, added a new NotIn extension method, and added documentation to the library with fluentx.xml
    - VK.NET - Vkontakte API for .NET: VkNet 1.0.5
    - Kartris E-commerce: Kartris v2.6002: Minor release: Double-check that the Logins_GetList sproc is present; it sometimes gets missed when upgrading, which can give an error when viewing the logins page. Added CSV and TXT export options; these are not Google Products compatible, but can give a good base for creating a file for some other systems such as Amazon. Fixed some minor combination and options issues to improve the interface back and front. Turned bitcoin and some other gateways off by default. Minor CSS changes. Fixed currenc...
    - SimCityPak: SimCityPak 0.3.1.0: Main new features: Fixed importing of instance names (get rid of the Dutch translations). Added an advanced editor for decal dictionaries. Added the possibility to import .PNG to generate new decals. Added an advanced editor for path display entries.
    - Tiny Deduplicator: Tiny Deduplicator 1.0.1.0: Increased version number to 1.0.1.0. Moved all options to a separate 'Options' dialog window. Allows the user to specify a selection strategy, which will help when dealing with large numbers of duplicate files. Available options are "None," "Keep First," and "Keep Last".
    - SEToolbox: SEToolbox 01.031.009 Release 1: Added mirroring of ConveyorTubeCurved. Updated ship cube rotation to rotate the ship back to its original location (cubes are reoriented but the ship appears no different to an outsider), and to rotate grouped items. Repair now fixes the loss of grouped controls due to changes in Space Engineers 01.030. Added export asteroids. Rejoin ships will merge grouping and conveyor systems (even though broken ships currently only maintain the grouping on one part of the ship). Installation of this version wi...
    - Player Framework by Microsoft: Player Framework for Windows and WP v2.0: Support for new Universal and Windows Phone 8.1 projects for both Xaml and JavaScript projects. See a detailed list of improvements, breaking changes and a general overview of version 2. Additional downloads: Smooth Streaming Client SDK for Windows 8 Applications, Smooth Streaming Client SDK for Windows 8.1 Applications, Smooth Streaming Client SDK for Windows Phone 8.1 Applications, Microsoft PlayReady Client SDK for Windows 8 Applications, Microsoft PlayReady Client SDK for Windows 8.1 Applicat...
    - TerraMap (Terraria World Map Viewer): TerraMap 1.0.6: Added support for the new Terraria v1.2.4 update: new items, walls, and tiles. Added the ability to select multiple highlighted block types. Added a dynamic, interactive highlight opacity slider, making it easier to find highlighted tiles with dark colors (and fixed blurriness from 1.0.5 alpha). Added the ability to find Enchanted Swords (in the stone) and Water Bolt books. Fixed Issue 35206: Highlight/Find doesn't work for Demon Altars. Fixed finding Demon Hearts/Shadow Orbs. Fixed inst...
    - DotNet.Highcharts: DotNet.Highcharts 4.0 with Examples: Tested and adapted to the latest version of Highcharts 4.0.1. Added new chart type: Heatmap. Added new type PointPlacement, which represents an enumeration or number for the padding of the X axis. Changed target framework from .NET Framework 4 to .NET Framework 4.5. Closed issues: 974: Add 'overflow' property to PlotOptionsColumnDataLabels class; 997: Split container from JS; 1006: Series/Categories with numeric names don't render. DotNet.Highcharts.Samples: Updated s...
    - Extended WPF Toolkit™ Community Edition: Extended WPF Toolkit - 2.2.0: What's new in v2.2.0 Community Edition? Improvements and bug fixes. Two new free controls: TimeSpanUpDown and RangeSlider. 15 bug fixes and improvements (see the complete list of improvements in v2.2.0). Updated Live Explorer app available online as a ClickOnce app. Try it now! Want an easier way to install the Extended WPF Toolkit? The Extended WPF Toolkit is available on NuGet. .NET Framework notes: requires .NET Framework 4.0 or 4.5. A build for .NET 3.5 is available but also requires ...

    New Projects

    - 2112110026: OOP - Lê Thị Xuân Hương
    - ASP.Net Controls Extended: ASP controls' look modified and behavior extended.
    - Audio Tools: The project is intended to capture common knowledge about popular audio file formats and related stuff.
    - Dnn Bootstrap Helpers: Dnn Bootstrap helpers (Tabs, Accordion & Carousel)
    - itouch - JS touch library for browser: this project hosts a JavaScript library which enables you to handle users' touch gestures like swiping, pinching and clicking in your web app, cross platform/device
    - MySQL Powershell Library: This PowerShell module attempts to provide convenient methods for working with MySQL. It makes use of the Oracle MySQL .NET connector version 6.8.3
    - Performance Analyzer for Microsoft Dynamics: Performance Analyzer for Microsoft Dynamics is a toolset developed by Premier Field Engineering at Microsoft for resolving performance issues with Dynamics
    - PRISA NEW: PRISA library
    - QFix Rx: This project aims at enabling Rx-based programming for the QuickFIX/n API by pushing events to the QuickFIX/n API and subscribing to event feeds from the API.
    - Riccsson.System a C# .NET library for C++: Riccsson.System is a C#-like library for C++ with support for Events, Delegates, Properties, Threading, Locking, and more, making it easier to port C# libraries to C++
    - SusicoTrader: An F# / C# based trading API with connections to IB and the QuickFix/n API.
    - TestCodePlex: test
    - TP2Academia.net: .NET project
    - TypeScriptTD: TypeScriptTD is a tower defense game written in TypeScript with help of the Phaser game engine. It is a port of ScriptTD http://scripttd.codeplex.com/
    - Universal Autosave: Universal Autosave (UA) is an extension for the DNN Platform. It allows easy and fast configuration of autosave functionality for any form/control without any coding.
    - WeatherView: A universal Windows app written in C# demonstrating geolocation and web services.

    Read the article

  • New Certification Exam: "Oracle Database 12c: SQL Fundamentals" Released (1Z0-061)

    - by Brandye Barrington
    Oracle Certification begins testing this week for the new Oracle Database 12c Administrator Certified Associate (OCA) certification. Testing for the Oracle Database 12c: SQL Fundamentals (1Z0-061) exam is now underway. Visit pearsonvue.com/oracle and register for exam 1Z0-061. You can get all preparation details, including exam objectives, number of questions, time allotments, and pricing, on the Oracle Certification Website. Earning the Oracle Database 12c Administrator Certified Associate (OCA) credential demonstrates that you have the foundational knowledge and skills needed to administer the Oracle Database, and sets the stage for your future progression to Oracle Database 12c Administrator Certified Professional (OCP). With Oracle Database 12c, you will experience the benefits of an Oracle Database that is re-engineered for Cloud computing. Multitenant architecture brings enterprises unprecedented hardware and software efficiencies, performance and manageability benefits, and fast and efficient Cloud provisioning. Oracle Database 12c certifications emphasize the full set of skills that DBAs need in today's competitive marketplace. Be among the first to obtain this groundbreaking new Oracle Certified Associate (OCA) certification by registering for this exam today.

    QUICK LINKS
    - Certification Path: Oracle Database 12c Administrator Certified Associate (OCA)
    - Certification Exam: Oracle Database 12c: SQL Fundamentals (1Z0-061)
    - Registration: pearsonvue.com/oracle

    Read the article

  • Ubuntu NTP issues

    - by Anups
    I am trying to set up an NTP server on an Ubuntu machine, and I have been breaking my head over this particular issue: I get the error "ntpdate[5005]: no server suitable for synchronization found" when running the ntpdate command. Can anyone please help me out?

    /etc/ntp.conf:

    server 0.ubuntu.pool.ntp.org
    server 1.ubuntu.pool.ntp.org
    server 2.ubuntu.pool.ntp.org
    server 3.ubuntu.pool.ntp.org

    Also, when I run netstat -anltp | grep "LISTEN" I get:

    tcp   0   0 127.0.0.1:53      0.0.0.0:*   LISTEN   1816/dnsmasq
    tcp   0   0 0.0.0.0:22        0.0.0.0:*   LISTEN   939/sshd
    tcp   0   0 127.0.0.1:631     0.0.0.0:*   LISTEN   1013/cupsd
    tcp   0   0 127.0.0.1:39558   0.0.0.0:*   LISTEN   5529/rsession
    tcp   0   0 0.0.0.0:902       0.0.0.0:*   LISTEN   1275/vmware-authdla
    tcp   0   0 127.0.0.1:47304   0.0.0.0:*   LISTEN   5822/rsession
    tcp6  0   0 :::80             :::*        LISTEN   1400/apache2
    tcp6  0   0 :::22             :::*        LISTEN   939/sshd
    tcp6  0   0 ::1:631           :::*        LISTEN   1013/cupsd

    So what should I do so that it listens on port 123? If I run nmap -p 123 -sU -P0 192.168.36.198 and get:

    PORT     STATE  SERVICE
    123/udp  open   ntp

    that means UDP is open, right? Then why doesn't it show up in the command that lists listening ports?
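
    One detail worth noting (hedged, since the exact behavior depends on the tools installed): netstat -anltp only lists TCP sockets because of the -t flag, while ntpd listens on UDP port 123, so it will never appear in that output. A UDP-oriented check along these lines should show whether the daemon is actually bound to 123:

    sudo netstat -anup | grep :123    # -u lists UDP sockets; ntpd should appear here
    sudo service ntp restart          # make sure ntpd itself is running
    ntpq -p                           # once it is up, list peers and sync state

    ntpdate also fails with "no server suitable for synchronization found" while ntpd is still settling, or when UDP 123 is blocked by a firewall, so a rule such as sudo ufw allow 123/udp may be needed as well.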

    Read the article

  • Advice and resources on collaborative environments

    - by Tjaart
    I need some advice on collaborative software environments. More specifically, I am looking for books and reference materials that can aid me in understanding team and code structures and the interactions thereof. In other words: books, blogs or white papers explaining different strategies for structuring teams that share common code but have distinct individual functions. To summarise my question, I would like to know what would be a good source of knowledge if I were to set up teams in an organisation that shared code while each unit still remained autonomous. I have done some research on this subject and explored code review tools, distributed VCS, continuous integration tools, and unit testing automation. The tough part about implementing these tools is determining where a good place to start would be, which tools are low-hanging fruit, and which tools or methods provide higher success rates. If someone asks me for a code-quality reference, I point them to Code Complete. I am looking for an equivalent guide on software team structures and the tools that make this equation work better. I realise that this question is quite vague, but it arose as "we need to share code between teams without breaking each other's stuff and causing management headaches and reams of red tape". The answer is definitely not simple and requires changes on many levels, hence the question. If the question is too vague please vote to close or delete. I would accept any good starting point as an answer.

    Read the article

  • Is there a product planning tool that has these specific features? [closed]

    - by acjohnson55
    I am working on a web startup in the early stages, and we are struggling a bit to manage the scope and scheduling of our product. We have loads of high-level features in the pipeline, but we need a good way of scheduling them for release iterations and breaking them into actual tasks that can be scheduled (that could be a separate tool, but integration would be preferred). I would say that our product can be pretty cleanly divided into "aspects", and we want to be able to separate features by the aspect to which they apply. Perhaps most importantly, it should be really simple to create and move features between target release points. We don't have physical space for a war room type setup, so whatever we settle upon should ideally have a cloud-type web interface. Right now, we're using Excel to make a grid of product aspects vs. target releases, and we store features at the intersections. But this is not providing a good way of indexing tasks to those features or being able to move them around. I would much rather have something that automates the grid overview. I'm less interested in something that helps with low-level scheduling than I am in something that is good at organizing the product plan at the long-term, high-level view. Is there a product planning tool out there that matches these specifications?

    Read the article

  • Running Outlook from VB with multiple email addresses [migrated]

    - by Mac
    I am sending emails from my VB6 system and I am having problems with sending a single email to multiple email addresses. The code is as follows:

    On Error Resume Next
    Err.Clear
    Set oOutLookObject = CreateObject("Outlook.Application")
    If Err <> 0 Then
        MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
        EmailValid = "N"
        Exit Function
    End If
    Set oEmailItem = oOutLookObject.CreateItem(0)
    If Err <> 0 Then
        MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
        EmailValid = "N"
        Exit Function
    End If
    With oEmailItem
        .Recipients.Add (SMRecipients)
        .Subject = SMSubject
        .Importance = IMPORTANCENORMAL
        .Body = SMBody
        For i = 1 To 10
            If RTrim(SMAttach(i)) <> "" Then
                .attachments.Add SMAttach(1) 'i)
            Else
                Exit For
            End If
        Next i
        .send
    End With
    If Err <> 0 Then
        MsgBox "Email error. Err = " & Err & " Description = " & Err.Description
        EmailValid = "N"
        Exit Function
    End If
    ''' .Attachments.Add ("c:\temp\test2.txt")
    Set oOutLookObject = Nothing

    I have set SMRecipients to a single email address and it works fine, but when I add more addresses separated by semicolons or spaces it only sends to the original address. My system runs under XP. Another point is that it used to find the addresses in the Outlook address book, and where they were not specific enough it would display the matching addresses for selection of the correct one. It no longer does this.

    Read the article

  • Do we ethically have the right to use the MAC Address for verification purposes?

    - by Matt Ridge
    I am writing a program, or starting at the very beginning of it, and I am thinking of purchase verification systems as a final step. I will be catering to Macs, PCs, and possibly Linux if all is said and done. I will also be programming this for smartphones using C++ and Objective-C. (I am writing a blueprint before going head first into it.) That being said, I am not asking for help on doing it yet; what I'm looking for is a realistic measurement of what could be expected as a viable and ethical option for purchase verification systems. Apple, through the Apple Store, and some other stores out there have their own "you bought it" check. I am looking to use a three-prong verification system:

    1. Email/password
    2. A 16-to-32-character serial number using alphanumeric characters and symbols, with upper- and lowercase variants
    3. MAC address

    The first two are, in my mind, OK, but I have to ask from an ethical standpoint: is using the MAC address to lock the software to that hardware unethical, or is it smart? I understand that if the Ethernet card changes (when it is not part of the logic board), or if the logic board itself changes, so does the MAC address, and in that case the software would have to be re-verified. But I have to ask, with how everything is today: is it ethical to actually use the MAC address as a validation key or not? Should I be forward with this kind of verification system, or should I keep it hidden as a secret? Yes, I know hackers and others will find ways of knowing what I am doing, but in reality this is why I am asking. I know no verification is foolproof, but making it harder to break is something I've always been interested in, and learning how to program is bringing up these questions, because I don't want to assume one thing and find out it's not really accepted in the programming world as a "you shouldn't do that" maneuver... Thanks in advance... I know this is my first programming question, but I am just learning how to program, and I am just making sure I'm not breaking some ethical programmer credo I shouldn't...

    Read the article

  • Why are job postings always looking for "rockstars"? [closed]

    - by Xepoch
    I have noticed a recent trend of requesting programmers who are "rockstars". I get it: they're looking for someone who is really good at what they do. But why (pray) make the reference to a rockstar? Do these companies really want the traits of a real rockstar?

    - Partying all night and waking up to take care of quick business in the morning
    - Substance abuse
    - Narcissism with celebrity
    - Compensation well exceeding their management
    - Excellence at putting on a short-lived show
    - Entertainment instead of value
    - One-hit (project) wonders or single-genre performers
    - Et cetera

    What is wrong with a Senior or Principal Software Engineer who has an established and proven passion for the business? Rather, do we mean quite the opposite: someone who

    - rolls up their sleeves and gets to work,
    - takes appropriate direction and helps influence teams,
    - programs in lessons learned and proper practices,
    - provides timely communication to the whole team,
    - can code and understand multiple languages,
    - understands the science and theory behind computation?

    Is there a trend to diversify the software engineering ranks? How many software rockstars can you hire before your band starts breaking up? Sure, there are lots of folks doing this stuff on their own, maybe even a rare few who do coding for show, but I wager the majority is in it for business. I don't see ads for rockstar accountants, or rockstar machinists, or rockstar CFOs. What makes software programmers and their hiring departments lean towards this kind of job title?

    Read the article

  • What is the canonical approach to using a VCS right from a project's infancy?

    - by Anonymous -
    Background

    I've used VCS (mainly git) in the past to manage many existing projects and it works great. Typically with an existing project, I would check in each change I make to the code that either optimizes or changes the overall functionality (you know what I mean: in suitable steps, not every single line I change).

    Problem

    One thing I've not had so much practise at is creating new projects. I'm in the process of starting a new project of my own that will probably grow quite large, but I'm finding that there is a lot to do and a lot changing in the first few days/hours/weeks/the period up until the product is actually functioning in its most basic form. Is there any point in me checking in each step of the process as I would with an existing project? I'm not breaking the project with changes I make, since it isn't working yet. At the moment I've simply been using VCS as a backup at the end of each day, when I leave the computer. My first few commits were things like "Basic directory structure in place" and "DB tables created". How should I use a VCS when starting a new project?

    Read the article

  • Introducing RedPatch

    - by timhill
    The Ksplice team is happy to announce the public availability of one of our git repositories, RedPatch. RedPatch contains the source for all of the changes Red Hat makes to their kernel, one commit per fix and we've published it on oss.oracle.com/git. With RedPatch, you can access the broken-out patches using git, browse them online via gitweb, and freely redistribute the source under the terms of the GPL. This is the same policy we provide for Oracle Linux and the Unbreakable Enterprise Kernel (UEK). Users can freely access the source, view the commit logs and easily identify the changes that are relevant to their environments. To understand why we've created this project we'll need a little history. In early 2011, Red Hat changed how they released their kernel source, going from a tarball that had individual patch files to shipping the kernel source as one giant tarball with a single patch for all Red Hat-introduced changes. For most people who work in the kernel this is merely an inconvenience; driver developers and other out-of-kernel module developers can see the end result to make sure their module still performs as expected. For Ksplice, we build individual updates for each change and rely on source patches that are broken-out, not a giant tarball. Otherwise, we wouldn’t be able to take the right patches to create individual updates for each fix, and to skip over the noise — like a change that speeds up bootup — which is unnecessary for an already-running system. We’ve been taking the monolithic Red Hat patch tarball and breaking it into smaller commits internally ever since they introduced this change. At Oracle, we feel everyone in the Linux community can benefit from the work we already do to get our jobs done, so now we’re sharing these broken-out patches publicly. In addition to RedPatch, the complete source code for Oracle Linux and the Oracle Unbreakable Enterprise Kernel (UEK) is available from both ULN and our public yum server, including all security errata. Check out RedPatch and subscribe to [email protected] for discussion about the project. Also, drop us a line and let us know how you're using RedPatch!
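
    For example, pulling the repository and inspecting the broken-out commits might look like the following. The exact repository path under oss.oracle.com/git is an assumption here, so check the gitweb index for the real name:

    git clone git://oss.oracle.com/git/redpatch.git   # assumed path; browse oss.oracle.com/git to confirm
    cd redpatch
    git log --oneline                                 # one commit per Red Hat fix
    git show HEAD                                     # view an individual broken-out patch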

    Read the article

  • How to connect Ethernet via a Huawei Smart AXMT880s modem (with auto-detection)?

    - by user12285
    It's been well over six months but I have the same router SmartAX MT880d with Ethernet, and the exact same problem : no internet, even though I can successfully reach the modem settings page by entering 192.168.1.1 in Firefox. I'm a total beginner with Ubuntu. My internet works great in Windows but does not work in Ubuntu. Sorry if I don't use the right (technical) terminology to explain my issue. English is not my mother tongue. For 2 weeks, I've been doing reading on the web and forums and the ubuntuguide.org to name a few, but to no avail. Now I see no other solution but to ask for help. My problem is that I can't find a way to put the right digits in the right place because I don't know what numbers I need to put in what files. E.g.: do I need to use DHCP? or a static IP address? No clue whatsoever. I'm concerned that I might put figures in the wrong spaces. For example, is the modem/router's IP exactly 192.168.1.1 for Huawei Smart AXMT880d modem? Is the subnet 255.255.255.0? Gateway 192.168.1.1? I'm confused as I can also see a different IP starting with 155131*** (is it an account number?) on my contract with Huawei (a Chinese ISP). Apart from calling 911, what other numbers do I need to put in and where? How do I check that all the numbers have been entered correctly in every appropriate space before trying to connect the Internet?
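
    For what it's worth, a minimal static setup in /etc/network/interfaces would look something like the sketch below, assuming the modem really is at 192.168.1.1 and that 192.168.1.2 is free on that subnet (both are assumptions, and the DNS server is a placeholder too):

    auto eth0
    iface eth0 inet static
        address 192.168.1.2        # assumed free address on the modem's subnet
        netmask 255.255.255.0
        gateway 192.168.1.1        # the modem, per its settings page
        dns-nameservers 8.8.8.8    # placeholder; your ISP's DNS is preferable

    After editing, sudo /etc/init.d/networking restart (or a reboot) applies the change. If the ISP instead expects a PPPoE login over the Ethernet link, a static address alone will not be enough and pppoeconf would be the next thing to try.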

    Read the article

  • Coping with build order requirements in automated builds

    - by Derecho
    I have three Scala packages being built as separate sbt projects in separate repos, with a dependency graph like this:

    M ----> D
    ^       ^
    |       |
    +---+---+
        ^
        |
        S

    S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages. If I make a breaking change to all three and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn't have the right dependent package versions available. Even pushing them simultaneously wouldn't be enough; the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn't enough, because that would just cause the previous version to be built, resulting in an artifact that doesn't have the needed version. Is there a way to set things up so that I don't have to remember to push things in the right order? The only way I can see it working is if there was a way that a build could go into a pending state if its dependencies weren't available yet. I feel like there's a simple solution I'm missing. Surely people deal with this a lot?
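
    Until a better pipeline exists, one blunt workaround is a wrapper script that builds and publishes the three repositories in dependency order, so S is never pushed before the M and D it was built against. The directory names below are hypothetical sibling checkouts, and it assumes each build publishes to the artifact repository Jenkins resolves against:

    #!/bin/sh
    # Publish in dependency order: D first, then M, then S.
    set -e
    for repo in d m s; do
        ( cd "$repo" && sbt publish && git push )   # publish artifacts before the push triggers Jenkins
    done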

    Read the article

  • Service Pack 1 for Telerik Extensions for ASP.NET MVC just released

    We just released the first service pack for the Q1 2010 release of Telerik Extensions for ASP.NET MVC. As you may have guessed, this is mostly a maintenance release addressing reported bugs. It is important to note that the service pack will be available only to licensed users. We will update the open source version only for major releases; however, if a critical bug is found we will publish builds in the forum, so no worries.

    What's new

    Everything is described in the release notes. There are a few breaking changes in the TreeView and Grid. Check here to see if you are affected:

    - Grid changes and backwards compatibility
    - TreeView changes and backwards compatibility

    We have also tested the extensions with Visual Studio 2010 and can confirm we fully support it. The source and samples will continue to ship in Visual Studio 2008 projects, though. Opening ...

    Read the article

  • Facebook contest policy no-no?

    - by Fred
    I would like to post a link on a Facebook page that exits Facebook entirely and goes to a client's website, where people land on a page (the client's) on which they can enter their e-mail address to be entered into a temporary database file, with rules and disclosures etc., for a draw once the number of entries reaches, for instance, 100. Once the number of entries reaches 100, a random winner is picked and notified via e-mail. The functionality is as follows:

    - A link is placed on a Facebook page leading to an external page
    - The page is a form to merely enter an email address for a contest
    - The email is placed in a temporary file
    - An automatic e-mail is sent to the address used for confirmation, using a SHA-256 hash (see the sketch below)
    - The person receives the e-mail saying something to the effect of "Please confirm your e-mail address etc. If you did not authorize this, simply ignore this message and no further action will be taken."
    - If the person clicks on the confirmation link, the e-mail is then stored in the database and the person is again notified: "Thank you for signing up etc."
    - Once others do the same and the database reaches a certain number, the form is no longer accessible and a random e-mail is automatically picked
    - Once picked, an e-mail is automatically sent to the winner stating the instructions, and notifying me also
    - Once that person clicks yet another confirmation link, the database is then automatically deleted

    I have built this myself and have no intention of breaking any rules, nor jeopardizing the work/time/energy I have put into this project. Is this allowed?
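
    On the implementation side, the confirmation token mentioned above can be derived from the entrant's address with a keyed SHA-256 hash. A rough sketch, where the secret and URLs are placeholders:

    email="entrant@example.com"
    secret="server-side-secret"    # placeholder; keep this on the server only
    token=$(printf '%s' "$email" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
    echo "http://client.example.com/confirm?email=$email&token=$token"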

    Read the article

  • "Why do transactional messages all have the same priority?"

    - by John Breakwell
    I answered this question on the MSMQ forum on MSDN and thought it worth sharing here. The poster wanted to know why all transactional messages have a fixed priority of zero (instead of 0 through 7). They wanted the guaranteed delivery of messages to a queue but wanted to assign different levels of priority. Some aspects of MSMQ were defined way back in the last century, and this is one of them. If I remember right, the reason was to avoid the following scenario:

    - You have a single transaction of 3 messages (a, b and c) with priorities 5, 3 and 4 respectively.
    - The messages are sent in the order a, b, c.
    - The messages arrive in the queue and are arranged in the order a, c, b to reflect priority order.
    - This breaks the guaranteed-order part of the transaction.

    I know that very few people send more than one message in a transaction, but that is a scenario that MSMQ has to be able to handle; for the majority, including yourself, this scenario is irrelevant, which is why you are surprised by the absence of transactional priorities. The options available to the Microsoft developers were therefore to:

    - Implement code that allowed you to send messages with variable priority, as long as any messages within the same transaction were the same priority, or
    - Define a set priority for all transactional messages.

    As you can understand, option 1 would be a complicated arrangement with all the necessary enforcement, error handling, user education, documentation, etc. Sure, it would have been possible to implement option 1, but I expect the product group decided to invest the development time in some other aspect of MSMQ. Now, with five versions out there, it would be confusing to change how the product operates, in addition to potentially breaking existing systems that have been working fine for years. So, balancing cost and risk against customer demand, I would not expect this feature to ever change.

    Read the article

  • Managing TFS Workspaces

    - by Enrique Lima
    You are the administrator (or perhaps just the one who knows the most about it) and you need to do some cleanup on what is connected, and perhaps even clean up after people who have left the organization with code still checked out in their workspaces.

    What permissions do I need? You will need the Administer Workspaces permission to perform the following tasks.

    The commands. To execute the commands, open a Visual Studio Command Prompt; once there, you will be able to use the tf command. This has a nice set of options, which I will provide a listing for in a later post.

    To list all registered workspaces:

    tf workspaces /collection:<url to your TPC> <workspace>;<owner>

    To delete a specific workspace:

    tf workspace /delete /server:<url to your TPC> <workspace>;<owner>

    If for any reason a workspace name has embedded spaces, surround it with "" (double quotes).
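
    Putting the two together, cleaning up after someone who has left might look like this. The collection URL, user name and workspace name are examples, and depending on the tf version /server: may be needed instead of /collection:; note the quotes around the workspace name with embedded spaces:

    tf workspaces /owner:jdoe /collection:http://tfs:8080/tfs/DefaultCollection
    tf workspace /delete "Dev Machine;jdoe" /collection:http://tfs:8080/tfs/DefaultCollection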

    Read the article

  • Ubuntu 12.10 Quantal Quetzal and AMD 12.11 Beta Driver

    - by White
    I'm using a Quantal AMD x64 install and an XFX Radeon HD5850 video card. I first enabled restricted drivers through Additional Drivers, but it resulted in breaking Unity and Compiz (I can only see my wallpaper and shortcuts, but the terminal still works, and Nautilus too, however without Close/Maximize/Minimize and slower). Then I uninstalled it and everything went back to normal. Then I installed it via the terminal (12.10 version), and the result was the same. Then I downloaded it from ATI's web site (12.11 beta) and installed the .run file using the terminal, but the result was yet again the same. Then I went to the terminal and entered these commands:

    sudo apt-get remove --purge fglrx fglrx_* fglrx-amdcccle* fglrx-dev* (it said it had nothing to uninstall)
    sudo rm /ect/x11/xorg.conf (no such directory)
    sudo dpkg-reconfigure xserver-xorg
    sudo startx
    sudo cp /ect/x11/xorg.conf.orig /ect/x11/xorg.conf (also, no such directory)
    sudo aticonfig --initial
    sudo reboot

    Then I was presented with the login screen, but when I tried to log in (with my account), it flashed a black screen and then threw me back. The guest account still works (without Unity and Compiz, though) and I can still use TTY. I also got the "AMD Testing Only" watermark. Then I figured that I should stop messing with the terminal and get help before I unleashed Apocalypse XD. Side notes: my Ubuntu is installed on an ext4 partition with 60GB, and I dual boot with Windows 7 (at least for now). My internet is a 50kbps 3G-ish connection, so downloading even small files is a pain, let alone a video driver. I would rather not reinstall the OS; it was a herculean task to download everything I have in there, and I have very little free disk space for backups. I'm still new to Ubuntu (I know some basic commands), and I don't know how to debug, so please be patient XD. And using Windows, my internet is even slower (is that possible?), so it kind of leaves a torture aftertaste xD. So, if you guys could answer quickly, it would be greatly appreciated. Thanks in advance. If you need any info, just ask (and explain how to get it XD).

    Read the article

  • Function for building an isosurface (a sphere cut by planes)

    - by GameDevEnthusiast
    I want to build an octree over a quarter of a sphere (for debugging and testing). The octree generator relies on the AIsosurface interface to compute the density and normal at any given point in space. For example, for a full sphere the corresponding code is:

    // returns <0 if the point is inside the solid
    virtual float GetDensity( float _x, float _y, float _z ) const override
    {
        Float3 P = Float3_Set( _x, _y, _z );
        Float3 v = Float3_Subtract( P, m_origin );
        float l = Float3_LengthSquared( v );
        float d = Float_Sqrt(l) - m_radius;
        return d;
    }

    // estimates the gradient at the given point
    virtual Float3 GetNormal( float _x, float _y, float _z ) const override
    {
        Float3 P = Float3_Set( _x, _y, _z );
        float d = this->AIsosurface::GetDensity( P );
        float Nx = this->GetDensity( _x + 0.001f, _y, _z ) - d;
        float Ny = this->GetDensity( _x, _y + 0.001f, _z ) - d;
        float Nz = this->GetDensity( _x, _y, _z + 0.001f ) - d;
        Float3 N = Float3_Normalized( Float3_Set( Nx, Ny, Nz ) );
        return N;
    }

    What is a nice and fast way to compute those values when the shape is bounded by a low number of half-spaces?

    Read the article

  • What does the Sys_PageIn() function do in Quake?

    - by Philip
    I've noticed that in the initialization process of the original Quake the following function is called.

    volatile int sys_checksum;

    // **lots of code**

    void Sys_PageIn (void *ptr, int size)
    {
        byte    *x;
        int     j, m, n;

        // touch all memory to make sure its there. The 16-page skip is to
        // keep Win 95 from thinking we're trying to page ourselves in (we are
        // doing that, of course, but there's no reason we shouldn't)
        x = (byte *)ptr;

        for (n=0 ; n<4 ; n++)
        {
            for (m=0 ; m<(size - 16 * 0x1000) ; m += 4)
            {
                sys_checksum += *(int *)&x[m];
                sys_checksum += *(int *)&x[m + 16 * 0x10000];
            }
        }
    }

    I think I'm just not familiar enough with paging to understand this function. The void* ptr passed to the function is a recently malloc()'d piece of memory that is size bytes big. This is the whole function; j is an unreferenced variable. My best guess is that the volatile int sys_checksum is forcing the system to physically read all of the space that was just malloc()'d, perhaps to ensure that these spaces exist in virtual memory? Is this right? And why would someone do this? Is it for some antiquated Win95 reason?

    Read the article

  • PSTN Trunk TDM400P Install on Asterisk / Trixbox

    - by Jona
    Hey all, I'm trying to get a TDM400P card with an FXO module to connect to our PSTN line. The card is correctly detected by Linux:

    [trixbox1.localdomain asterisk]# lspci
    00:09.0 Communication controller: Tiger Jet Network Inc. Tiger3XX Modem/ISDN interface

    I've run setup-pstn, which produces the following output:

    [trixbox1.localdomain ~]# setup-pstn
    --------------------------------------------------------------
    Detecting PSTN cards and USB PSTN Devices
    --------------------------------------------------------------
    Hardware present!
    STOPPING ASTERISK
    Asterisk Stopped
    STOPPING FOP SERVER
    FOP Server Stopped
    Unloading DAHDI hardware modules: done
    Loading DAHDI hardware modules:
    wct4xxp:    [ OK ]
    wcte12xp:   [ OK ]
    wct1xxp:    [ OK ]
    wcte11xp:   [ OK ]
    wctdm24xxp: [ OK ]
    opvxa1200:  [ OK ]
    wcfxo:      [ OK ]
    wctdm:      [ OK ]
    wcb4xxp:    [ OK ]
    wctc4xxp:   [ OK ]
    xpp_usb:    [ OK ]
    Running dahdi_cfg: [ OK ]
    SETTING FILE PERMISSIONS
    Permissions OK
    STARTING ASTERISK
    Asterisk Started
    STARTING FOP SERVER
    FOP Server Started
    Chan Extension Context    Language MOH Interpret Blocked State
    pseudo         default    en       default               In Service
    1              from-pstn  en       default               In Service

    dahdi_scan returns:

    [1]
    active=yes
    alarms=OK
    description=Wildcard TDM400P REV I Board 5
    name=WCTDM/4
    manufacturer=Digium
    devicetype=Wildcard TDM400P REV I
    location=PCI Bus 00 Slot 10
    basechan=1
    totchans=4
    irq=209
    type=analog
    port=1,FXO
    port=2,none
    port=3,none
    port=4,none

    And asterisk can see the channel:

    trixbox1*CLI> dahdi show channel 1
    Channel: 1
    File Descriptor: 14
    Span: 1
    Extension:
    Dialing: no
    Context: from-pstn
    Caller ID:
    Calling TON: 0
    Caller ID name:
    Mailbox: none
    Destroy: 0
    InAlarm: 1
    Signalling Type: FXS Kewlstart
    Radio: 0
    Owner: <None>
    Real: <None>
    Callwait: <None>
    Threeway: <None>
    Confno: -1
    Propagated Conference: -1
    Real in conference: 0
    DSP: no
    Busy Detection: no
    TDD: no
    Relax DTMF: no
    Dialing/CallwaitCAS: 0/0
    Default law: ulaw
    Fax Handled: no
    Pulse phone: no
    DND: no
    Echo Cancellation: 128 taps (unless TDM bridged), currently OFF
    Actual Confinfo: Num/0, Mode/0x0000
    Actual Confmute: No
    Hookstate (FXS only): Onhook

    A cat of /etc/asterisk/dahdi-channels.conf shows:

    [trixbox1.localdomain ~]# cat /etc/asterisk/dahdi-channels.conf
    ; Autogenerated by /usr/sbin/dahdi_genconf on Tue May 25 17:45:13 2010
    ; If you edit this file and execute /usr/sbin/dahdi_genconf again,
    ; your manual changes will be LOST.
    ; Dahdi Channels Configurations (chan_dahdi.conf)
    ;
    ; This is not intended to be a complete chan_dahdi.conf. Rather, it is intended
    ; to be #include-d by /etc/chan_dahdi.conf that will include the global settings
    ;
    ; Span 1: WCTDM/4 "Wildcard TDM400P REV I Board 5" (MASTER)
    ;;; line="1 WCTDM/4/0 FXSKS (SWEC: MG2)"
    signalling=fxs_ks
    callerid=asreceived
    group=0
    context=from-pstn
    channel => 1
    callerid=
    group=
    context=default

    I have configured a "ZAP Trunk (DAHDI compatibility Mode)" with the ZAP identifier 1 and an outbound route, but whenever I try to make an external call via it I get the "All circuits are busy now, please try your call again later" message. I have one outbound route, which uses the dial pattern 9|. and the trunk Zap/1, and one ZAP trunk, which uses ZAP identifier (trunk name) 1 and has no dial rules. The FXO module is directly connected to our phone line from BT via a BT-RJ11 cable.
    When running tail -f /var/log/asterisk/full and placing a call, I get the following output:

    [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP TOS bits 184
    [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP RTP CoS mark 5
    [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP TOS bits 136
    [May 26 11:10:52] VERBOSE[2723] logger.c: == Using SIP VRTP CoS mark 6
    [May 26 11:10:52] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:1] Macro("SIP/801-b7ce8c28", "user-callerid,SKIPTTL,") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:1] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:2] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:3] ExecIf("SIP/801-b7ce8c28", "1?Set(REALCALLERIDNUM=801)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:4] Set("SIP/801-b7ce8c28", "AMPUSER=801") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:5] Set("SIP/801-b7ce8c28", "AMPUSERCIDNAME=Jona") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:6] GotoIf("SIP/801-b7ce8c28", "0?report") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:7] Set("SIP/801-b7ce8c28", "AMPUSERCID=801") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:8] Set("SIP/801-b7ce8c28", "CALLERID(all)="Jona" <801>") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:9] Set("SIP/801-b7ce8c28", "REALCALLERIDNUM=801") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:10] ExecIf("SIP/801-b7ce8c28", "0?Set(CHANNEL(language)=)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:11] GotoIf("SIP/801-b7ce8c28", "1?continue") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-user-callerid,s,20)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-user-callerid:20] NoOp("SIP/801-b7ce8c28", "Using CallerID "Jona" <801>") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:2] Set("SIP/801-b7ce8c28", "_NODEST=") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:3] Macro("SIP/801-b7ce8c28", "record-enable,801,OUT,") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:1] GotoIf("SIP/801-b7ce8c28", "1?check") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-record-enable,s,4)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:4] AGI("SIP/801-b7ce8c28", "recordingcheck,20100526-111052,1274868652.1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Launched AGI Script /var/lib/asterisk/agi-bin/recordingcheck
    [May 26 11:10:52] VERBOSE[2858] logger.c: recordingcheck,20100526-111052,1274868652.1: Outbound recording not enabled
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28>AGI Script recordingcheck completed, returning 0
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-record-enable:5] MacroExit("SIP/801-b7ce8c28", "") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:4] Macro("SIP/801-b7ce8c28", "dialout-trunk,1,01483890915,") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:1] Set("SIP/801-b7ce8c28", "DIAL_TRUNK=1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:2] GosubIf("SIP/801-b7ce8c28", "0?sub-pincheck,s,1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:3] GotoIf("SIP/801-b7ce8c28", "0?disabletrunk,1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:4] Set("SIP/801-b7ce8c28", "DIAL_NUMBER=01483890915") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:5] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=tr") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:6] Set("SIP/801-b7ce8c28", "OUTBOUND_GROUP=OUT_1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:7] GotoIf("SIP/801-b7ce8c28", "1?nomax") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s,9)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:9] GotoIf("SIP/801-b7ce8c28", "0?skipoutcid") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:10] Set("SIP/801-b7ce8c28", "DIAL_TRUNK_OPTIONS=") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:11] Macro("SIP/801-b7ce8c28", "outbound-callerid,1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:1] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:2] ExecIf("SIP/801-b7ce8c28", "0?Set(REALCALLERIDNUM=801)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:3] GotoIf("SIP/801-b7ce8c28", "1?normcid") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,6)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:6] Set("SIP/801-b7ce8c28", "USEROUTCID=") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:7] Set("SIP/801-b7ce8c28", "EMERGENCYCID=") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:8] Set("SIP/801-b7ce8c28", "TRUNKOUTCID=") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:9] GotoIf("SIP/801-b7ce8c28", "1?trunkcid") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-outbound-callerid,s,12)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:12] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:13] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERID(all)=)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outbound-callerid:14] ExecIf("SIP/801-b7ce8c28", "0?Set(CALLERPRES()=prohib_passed_screen)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:12] ExecIf("SIP/801-b7ce8c28", "0?AGI(fixlocalprefix)") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:13] Set("SIP/801-b7ce8c28", "OUTNUM=01483890915") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:14] Set("SIP/801-b7ce8c28", "custom=DAHDI/1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:15] ExecIf("SIP/801-b7ce8c28", "0?Set(DIAL_TRUNK_OPTIONS=M(setmusic^))") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:16] Macro("SIP/801-b7ce8c28", "dialout-trunk-predial-hook,") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk-predial-hook:1] MacroExit("SIP/801-b7ce8c28", "") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:17] GotoIf("SIP/801-b7ce8c28", "0?bypass,1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:18] GotoIf("SIP/801-b7ce8c28", "0?customtrunk") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:19] Dial("SIP/801-b7ce8c28", "DAHDI/1/01483890915,300,") in new stack
    [May 26 11:10:52] WARNING[2858] app_dial.c: Unable to create channel of type 'DAHDI' (cause 0 - Unknown)
    [May 26 11:10:52] VERBOSE[2858] logger.c: == Everyone is busy/congested at this time (1:0/0/1)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-dialout-trunk:20] Goto("SIP/801-b7ce8c28", "s-CHANUNAVAIL,1") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,1)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:1] GotoIf("SIP/801-b7ce8c28", "1?noreport") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Goto (macro-dialout-trunk,s-CHANUNAVAIL,3)
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s-CHANUNAVAIL@macro-dialout-trunk:3] NoOp("SIP/801-b7ce8c28", "TRUNK Dial failed due to CHANUNAVAIL (hangupcause: 0) - failing through to other trunks") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [901483890915@from-internal:5] Macro("SIP/801-b7ce8c28", "outisbusy,") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:1] Playback("SIP/801-b7ce8c28", "all-circuits-busy-now,noanswer") in new stack
    [May 26 11:10:52] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'all-circuits-busy-now.ulaw' (language 'en')
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-outisbusy:2] Playback("SIP/801-b7ce8c28", "pls-try-call-later,noanswer") in new stack
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- <SIP/801-b7ce8c28> Playing 'pls-try-call-later.ulaw' (language 'en')
    [May 26 11:10:54] WARNING[2661] pbx.c: FONALITY: This thread has already held the conlock, skip locking
    [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (macro-outisbusy, s, 2) exited non-zero on 'SIP/801-b7ce8c28' in macro 'outisbusy'
    [May 26 11:10:54] VERBOSE[2858] logger.c: == Spawn extension (from-internal, 901483890915, 5) exited non-zero on 'SIP/801-b7ce8c28'
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [h@from-internal:1] Macro("SIP/801-b7ce8c28", "hangupcall") in new stack
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:1] ResetCDR("SIP/801-b7ce8c28", "vw") in new stack
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:2] NoCDR("SIP/801-b7ce8c28", "") in new stack
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:3] GotoIf("SIP/801-b7ce8c28", "1?skiprg") in new stack
    [May 26 11:10:54] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,6)
    [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:6] GotoIf("SIP/801-b7ce8c28", "1?skipblkvm") in new stack
    [May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,9)
    [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:9] GotoIf("SIP/801-b7ce8c28", "1?theend") in new stack
    [May 26 11:10:55] VERBOSE[2858] logger.c: -- Goto (macro-hangupcall,s,11)
    [May 26 11:10:55] VERBOSE[2858] logger.c: -- Executing [s@macro-hangupcall:11] Hangup("SIP/801-b7ce8c28", "") in new stack
    [May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (macro-hangupcall, s, 11) exited non-zero on 'SIP/801-b7ce8c28' in macro 'hangupcall'
    [May 26 11:10:55] VERBOSE[2858] logger.c: == Spawn extension (from-internal, h, 1) exited non-zero on 'SIP/801-b7ce8c28'

    I'm guessing I've missed a configuration step somewhere, but I have no idea where. Any help greatly appreciated.
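
    For what it's worth, the key lines above are "Unable to create channel of type 'DAHDI' (cause 0 - Unknown)" together with "InAlarm: 1" in the channel dump, which usually point at the span being in alarm or chan_dahdi not having registered channel 1. A few checks worth running (hedged as a starting point, assuming a stock trixbox layout; these are standard DAHDI/Asterisk tools):

    dahdi_cfg -vv                                  # re-apply /etc/dahdi/system.conf and show the channel map
    asterisk -rx "module show like chan_dahdi"     # confirm the DAHDI channel driver is loaded
    asterisk -rx "dahdi show status"               # span alarm state; anything other than OK means line trouble
    asterisk -rx "dahdi restart"                   # re-read the DAHDI channel configuration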

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile, Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busting glass going 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old civil war bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way. :) We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too, and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt's foot, we probably would have hit a buoy and drowned. Alright, so…this pig on Matt's foot…this is no henna tatt, this is a man's tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible, in my opinion, is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones' Locker. I didn't spend a whole lot of time researching the pig and the rooster, so take these musings with a grain of salt. And I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are us old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot. I guess my choice would have to be a Commodore 64. (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was a very unassuming individual sporting business casual, and very much an old-school programmer "sailor". I can't remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I'm sure that Ward doesn't know this… he couldn't have set a more positive example for a young 22-year-old programmer. Thanks Ward!)

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
    ==========
    motherboard == gigabyte 990FXA-UD7
    CPU         == AMD FX-8150 8-core bulldozer @ 3.6 GHz
    RAM         == 8GB of DDR3 in 2 sticks (matched pair)
    HDD         == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
    HDD         == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
    HDD         == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    HDD         == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    GPU         == nvidia GTX-285
    ???         == no overclocking or other funky business
    USB         == external seagate 2TB HDD for making backups
    DVD         == one bluray burner (SATA)
    DVD         == one DVD burner (SATA)

    64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
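    One likely culprit, for what it's worth: a 3TB disk is too big for an MBR partition table (the limit is 2TiB), so the installer puts a GPT label on it, and GRUB can only boot a GPT disk from a traditional (non-UEFI) BIOS if there is a small unformatted partition flagged bios_grub for it to embed its core image into. Below is a minimal sketch of such a layout scripted around parted; it is purely illustrative, /dev/sdX is a hypothetical placeholder, and the sizes simply mirror the layout tried above.

        # Minimal sketch (not a tested recipe): GPT layout for a >2TB disk that
        # boots from a traditional BIOS. Assumes parted is installed; /dev/sdX is
        # a hypothetical placeholder -- verify the real device with lsblk first.
        import subprocess

        DISK = "/dev/sdX"

        commands = [
            ["parted", "-s", DISK, "mklabel", "gpt"],
            # tiny unformatted partition where GRUB embeds its core image
            ["parted", "-s", DISK, "mkpart", "grubcore", "1MiB", "3MiB"],
            ["parted", "-s", DISK, "set", "1", "bios_grub", "on"],
            ["parted", "-s", DISK, "mkpart", "boot", "ext4", "3MiB", "8GiB"],        # /boot
            ["parted", "-s", DISK, "mkpart", "swap", "linux-swap", "8GiB", "16GiB"],
            ["parted", "-s", DISK, "mkpart", "root", "ext4", "16GiB", "100%"],       # /
        ]

        for cmd in commands:
            subprocess.run(cmd, check=True)  # stop at the first parted error

    With a bios_grub partition in place, GRUB can embed itself and the "unknown filesystem / grub rescue>" symptom typically goes away; under UEFI the equivalent requirement is a FAT32 EFI System Partition instead.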

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support Timed Batch. Traditionally, batch is associated with set processing in the background in a fixed time frame, for example, billing customers. Over the last few versions, the products have needed a more monitoring-style batch process. A monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions; the batch process then uses the instructions on the object to determine what to do. To support monitor-style processing, you need to run the process regularly, a number of times a day (for example, every ten minutes). Traditional batch could support this but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was therefore extended to support timed jobs (and continuous batch, which is another new feature for another blog entry). The new facilities include:

    - The batch control now defines the job as Timed or Not Timed. Non-Timed batch jobs are traditional batch jobs.
    - The timer interval (the interval between executions) can be specified.
    - The timer can be made active or inactive; only active timers are executed. Setting Timer Active to Inactive will stop the job at the next time interval. Setting Timer Active to Active will start the execution of the timed job.
    - You can specify the credentials, the language used to view the messages, and an email address to which a summary of the execution is sent. The email address is optional and requires an email server to be specified in the relevant feature configuration.
    - You can specify the thread limits and commit intervals to be used for the multiple executions.

    Once a timed job is defined, it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time any batch process with Timer Active set to Active and a Batch Control Type of Timed will begin executing. As timed jobs are executed automatically, they do not appear in any external schedule, nor are they managed by an external scheduler (except via the DEFAULT threadpool itself, of course).

    Now, if the job has no work to do as the timer interval is reached, that instance of the job is stopped and the next instance started at the timer interval. If there is still work to complete when the interval is reached, the instance will continue processing until the work is complete; then the instance will be stopped and the next instance scheduled for the next timer interval (a small sketch of this rule follows below). One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.
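    To make those interval semantics concrete, here is a small illustrative loop in plain Python. It is a sketch of the behaviour described above, not the framework's actual implementation; do_work and timer_active are stand-ins for the batch instance and the Timer Active flag.

        # Sketch only: one instance runs per timer interval; an instance that
        # overruns its interval finishes its work, and the next instance starts
        # at the next interval boundary.
        import time

        def run_timed_job(do_work, interval_seconds, timer_active):
            next_start = time.monotonic()
            while timer_active():             # Timer Active flag on the batch control
                do_work()                     # one instance; returns when no work remains
                now = time.monotonic()
                while next_start <= now:      # skip boundaries missed while busy
                    next_start += interval_seconds
                time.sleep(next_start - now)  # idle until the next boundary

    Note that setting the interval well below the typical processing time just makes every instance roll straight into the next, which is why sizing the interval to the expected workload matters.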

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said "generally" and not "always"; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I'll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore, you might also have datetime fields that indicate the effective time period of each member record. Picture such a table containing data for four dimension members {Terry, Max, Henry, Horace}: we have multiple records per customer, and the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be:

        Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]

    The conventional approach to this would be to use a full-cache lookup, but that isn't an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. That dataflow is built entirely from synchronous components, and it works just fine; however, it has the limitation that it has to issue a SQL statement against your lookup set for every row, so we can expect the execution time of our dataflow to increase linearly with the number of rows in our dataflow; that's not good.

    OK, that's the obvious method. Let's now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. Looking at the row counts post-execution, there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join). I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo

    For those that can't be bothered watching the video, I'll tell you the results here. The dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors.
    It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup performs a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow (a tiny Python analogue of this set-based idea appears below). I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
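    For the curious, here is a tiny Python analogue of the set-based approach. It is an illustration of the Merge Join + Conditional Split idea only, not the demo package itself, and the sample rows are made up: join every incoming row to all dimension rows sharing its [CustomerName], then apply the inequality predicate once over the joined set.

        # Illustration only: join on CustomerName (the Merge Join), then filter on
        # the inequality predicate, set-based (the Conditional Split).
        from datetime import date

        dimension = [  # (CustomerName, CustomerId, SCDStartDate, SCDEndDate)
            ("Terry", 1, date(2009, 1, 1), date(2010, 1, 1)),
            ("Terry", 2, date(2010, 1, 1), date(9999, 12, 31)),
        ]
        incoming = [("Terry", date(2010, 6, 1))]  # (CustomerName, EffectiveDate)

        # Both inputs sorted on the join key, as the Sort transforms require.
        joined = [
            (name, eff, cid, start, end)
            for name, eff in sorted(incoming)
            for dname, cid, start, end in sorted(dimension)
            if dname == name        # more rows out than in, as in the dataflow
        ]

        # Keep the row whose SCD window contains the effective date.
        matched = [(name, eff, cid) for name, eff, cid, start, end in joined
                   if start <= eff <= end]
        print(matched)              # [('Terry', datetime.date(2010, 6, 1), 2)]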

    Read the article

  • How to Browse Without a Trace with an Ubuntu Live CD

    - by Trevor Bekolay
    No matter how diligently you clear your cache and erase your history, web browsing leaves traces on your computer. If you need to keep your browsing private, then an Ubuntu Live CD is the answer. The key to this trick is that the Live CD environment runs completely in RAM, so things like your cache, cookies, and history don't get saved to a persistent storage location. On a hard drive, even deleted files can be recovered, but once a computer is turned off, the data stored in RAM is unrecoverable. In addition, since the Ubuntu Live CD environment is the same no matter what computer you use it on, there's very little identifying information that a website can use to track you!

    The first step is to either burn an Ubuntu Live CD or prepare a non-persistent Ubuntu USB flash drive. Ubuntu treats non-persistent flash drives like CDs, so files will not be written to them, but if you're paranoid, using a physical CD ensures that nothing gets written to a storage device. Boot up from the CD or flash drive, and choose to run Ubuntu from the CD or flash drive if prompted (for more detailed instructions on booting from a CD or USB drive, see this article, or our guide on booting from a flash drive even if your BIOS won't let you). Once the graphical Ubuntu environment comes up, you can click on the Firefox icon at the top of the screen to start browsing.

    If your browsing requires Flash, you can install it by clicking on System at the top-left of the screen, then Administration > Synaptic Package Manager. Click on Settings at the top of the Synaptic window, and then select Repositories. Add a check in the checkbox with the label ending in "multiverse". Click Close. Click the Reload button in the main Synaptic window; the list of available packages will reload. When they've reloaded, type "restricted" in the Quick search box. Right-click on ubuntu-restricted-extras and select Mark for Installation. It will note a number of other packages that will be installed; this list includes audio and video codecs, so after installing these, you should be able to play downloaded movies and songs. Click Mark to accept the installation of these other packages. Once you return to the main Synaptic window, click the Apply button and go through the dialogs to finish the installation of Flash and the other useful packages. If you open up Firefox now, you'll have no problems using websites that use Flash.

    When you're done browsing and shut down or restart your computer, all traces of your web browsing will be gone. It's a bit of work compared to just using a privacy-centric browser, but if it's very important that your browsing leave no traces on your hard drive, an Ubuntu Live CD is your best bet.
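    If you'd rather skip the Synaptic clicking, the same install can be scripted from a terminal in the live session. A sketch, assuming the multiverse component has already been enabled in /etc/apt/sources.list:

        # Command-line equivalent of the Synaptic steps above (sketch; assumes an
        # Ubuntu live session with the multiverse repository already enabled).
        import subprocess

        for cmd in (
            ["sudo", "apt-get", "update"],                                     # = Reload
            ["sudo", "apt-get", "install", "-y", "ubuntu-restricted-extras"],  # Flash + codecs
        ):
            subprocess.run(cmd, check=True)

    Because the live session runs entirely in RAM, the installed packages vanish on shutdown along with everything else, so this has to be repeated each session.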
    Download Ubuntu

    Read the article
