Search Results

Search found 122511 results on 4901 pages for 'need help'.


  • Attention NYC Area Marketers: Don't Miss This Executive Breakfast on Brand Building in the Digital Era

    - by Christie Flanagan
    Presenting and Managing Digital Content – A New Approach
    Reach Your Audiences Where They Are with Multi-Channel Marketing

    Attention marketers in the greater New York City area! Oracle Platinum Partner, Bluenog, invites you to an executive breakfast seminar on brand building in the digital era. In an age where consumers are spending increasing amounts of their time online, interacting, communicating and being influenced by other brands, you too must go online with a coordinated plan. And, given the hundreds, if not thousands, of places that might be relevant, having the right content and the right tools are critical. This two-part presentation will focus on the growing need for content and connection in building and maintaining your brand, as well as the role of technology in helping you maintain brand consistency, reach and interaction while simplifying delivery to web, tablet, mobile, and social audiences.

    Location: Oracle Offices, 520 Madison Ave, 30th Floor, New York, NY 10022
    Day/Time: Thursday, May 3, 2012, 9:30 AM to 11:30 AM

    About the Speakers:
    Michelle Pujadas is an award-winning marketer and communicator who has worked with more than 125 companies to help them package, launch and expand their brand presence, online and off. Michelle is the Founder and co-CEO of Zer0 to 5ive, a strategic marketing and communications firm that focuses on B2B and B2C technology companies, with offices in NY, Philadelphia and Chicago.
    Peter Conrad, the E 2.0 Practice Director for Bluenog, focuses on translating exciting visions for user experiences into well-executed technical implementations leveraging advanced WebCenter technology from Oracle. Bluenog provides the systems and professional services today's forward-looking marketing organizations need to convert content, business capabilities, and communications into productive interactions with customers and prospects.

    Agenda:
    09:30am Arrival, Registration & Breakfast
    10:00am Brand Building through Content and Connection, presented by Michelle Pujadas, Founder and co-CEO of Zer0 to 5ive
    10:30am Leveraging Technology for Brand Reach, Consistency and Interaction, presented by Peter Conrad, E 2.0 Practice Director at Bluenog
    11:15am Q&A
    11:30am Adjourn

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me. At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.
The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way. DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal. Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. For the same reason, the instructions in Part 2 won't work exactly as written. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008.

In general, I found the following differences in installation steps from the steps at PowerPivot-Info: You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (a sketch of this edit appears at the end of this section). You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features.

I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error: Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in if you're doing a regular server-based installation of SharePoint, because it installs automatically as part of the prerequisites.
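For reference, here is roughly what the config.xml edit mentioned earlier looks like. This is a sketch of the client-install setting described in the MSDN article above; the exact location of config.xml inside the extracted media (typically files\Setup\config.xml) may vary with the media layout:

<Configuration>
  <!-- ...existing settings from the installation media... -->
  <!-- Permits SharePoint 2010 setup to run on a Windows client OS -->
  <Setting Id="AllowWindowsClientInstall" Value="True"/>
</Configuration>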
When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions: On the Choose the installation you want page of the installation wizard, I chose Server Farm. On the Server Type page, I chose Complete. At the end of the installation, I did not run the configuration wizard.

Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end. It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I do have it installed. Continue reading to see how I accomplished my objective.

I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

1. Install SQL Server 2008 R2 to get a database engine instance installed.
2. Run the SharePoint configuration wizard to set up the SharePoint databases.
3. In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization.
4. Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note: you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment. I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information.
I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • WebLogic 12c hands-on bootcamps for partners – free of charge

    - by swalker
    For all WebLogic Partner Community members we offer our WebLogic 12c hands-on Bootcamps – free of charge! If you want to become Application Grid Specialized: Register Here!

    Country | Date | Location | Registration
    Germany | 3-5 April 2012 | Oracle Düsseldorf | Click here
    France | 24-26 April 2012 | Oracle Colombes | Click here
    Spain | 08-10 May 2012 | Oracle Madrid | Click here
    Netherlands | 22-24 May 2012 | Oracle Amsterdam | Click here
    United Kingdom | 06-08 June 2012 | Oracle Reading | Click here
    Italy | 19-21 June 2012 | Oracle Cinisello Balsamo | Click here
    Portugal | 10-12 July 2012 | Oracle Lisbon | Click here

    Skill requirements: Attendees need to have the following skills, as this is required by the product set and to make sure they get the most out of the training:
    - Basic knowledge of Java and Java EE
    - Understanding of the application server concept
    - Basic knowledge of older releases of WebLogic Server (beneficial)
    - Membership in the WebLogic Partner Community; for registration please visit http://www.oracle.com/partners/goto/wls-emea

    Hardware requirements: Every participant works on his own notebook. The minimal hardware requirements are:
    - 4Gb physical RAM (we will boot the image with 2Gb RAM)
    - dual-core CPU
    - 15 GB HDD

    Software requirements: Please install Oracle VM VirtualBox 4.1.8.

    Follow-up and certification: With the workshop registration you agree to the following next steps: attend the follow-up training, and attend and pass the Oracle Application Grid Certified Implementation Specialist assessment. For details and registration please visit Register Here.

    Free WebLogic Certification (free assessment voucher to become certified): For all WebLogic experts, we offer free vouchers worth $195 for the Oracle Application Grid Certified Implementation Specialist assessment. To demonstrate your WebLogic knowledge you first have to pass the free online assessment Oracle Application Grid PreSales Specialist. For free vouchers, please send an e-mail with a screenshot of your Oracle Application Grid PreSales certificate to [email protected], including your name, company, e-mail and country. Note: This offer is limited to partners from Europe, the Middle East and Africa. Partners from other countries, please contact your Oracle partner manager.

    WebLogic Specialization: To become specialized in Application Grid, please make sure that you access:
    - the Application Grid Specialization Guide
    - the Application Grid Specialization Checklist
    If you have any questions please contact the Oracle Partner Business Center.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Don't Miss What Procurement Experts Are Talking About. Join the Webcasts starting next week!

    - by LuciaC
    The Procurement team has three Advisor Webcasts scheduled in December, with information about new features, tips and tricks, and troubleshooting guidance.

    New Features and Enhancements Incorporated in the Procurement Rollup Patch 14254641:R12.PRC_PF.B
    December 4, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 07:00 am Mountain / 09:00 am Eastern. This session is recommended for technical and functional users who need to know about the new features and enhancements incorporated in the Procurement Rollup Patch. Topics will include:
    - GCPA Enable All Sites
    - E-Mail PO - .LANGUAGE
    - Read Only BWC
    - Validate Document GBPA
    - OSP Items
    - GL Date Defaulting
    - Cancel Refactoring
    - Action History Cleanup
    Click here to register for this event.

    Approval Management Engine (AME) New Features, Setup and Use for Purchase Orders
    December 6, 2012 at 14:00 London / 16:00 Egypt / 06:00 am Pacific / 07:00 am Mountain / 09:00 am Eastern. This session is recommended for functional users and application technical users who work in the Procurement module, including Purchasing and iProcurement, and would like to know more about how to set up and use the Approval Management Engine (AME) for purchase orders. Topics will include:
    - Scope and limitations of AME functionality for purchase orders
    - Setup and use of AME for purchase orders
    - PO Review and PO E-Sign new features
    - Demonstration: example scenarios using the new features
    Click here to register for this event.

    How to Solve Approval Errors in Procurement
    December 18, 2012 at 2:00 pm London / 4:00 pm Egypt / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern. This session is recommended for technical and functional users who need to know how to diagnose and troubleshoot common approval errors. Topics will include:
    - Basic mandatory setups for approvals of PO documents
    - Differences between the Purchase Order approval and Requisition approval processes
    - Troubleshooting of approval errors
    - Basic setup of AME for use in the Requisition approval process
    Click here to register for this event.

    You can see a listing of all scheduled and archived webcasts in Doc ID 740966.1. Select the product you are interested in (such as E-Business Suite Procurement) and this will take you to the webcast listing for that product.

    Read the article

  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
    I have a laptop mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On Ubuntu 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers: the FN+F4/F5 buttons did not change my brightness. I tried to edit xorg.conf, adding Option "RegistryDwords" "EnableBrightnessControl=1". I also tried to add some lines to grub: acpi_osi="Linux" acpi_backlight=vendor. Neither worked for me. Today I installed Ubuntu 12.04 beta2 and... with the nouveau driver my FN keys work and change the brightness (whether it's the new 3.2 Linux kernel or a patched nouveau driver, I don't know). This is a big step forward. But when I install the proprietary nvidia driver (295.33), the FN keys stop working and I can't change brightness. I also tried the workaround with xorg and grub, with no result. I tried to install acpi from apt - no result. Is there anything left to try? I really need the nvidia driver working with the FN keys, as I would like to have working 3D acceleration. P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers? If there is a need to provide some log data, please write what I should print, as I'm a bit new to Ubuntu. P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others). P.P.P.S. The other FN buttons work with both drivers (Mute, VOL UP/DOWN, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song). Some new thoughts... could CONFIG_BACKLIGHT_GENERIC=m be an issue? I got this by running grep BACKLIGHT /boot/config-3.2.0-22-generic-pae. The full grep output can be viewed here: http://pastebin.com/sMRd2Z4k
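    For anyone retracing the xorg.conf attempt described above, that option normally belongs in the nvidia Device section. A sketch follows; only the Option line comes from the question, and the Identifier string and surrounding section are illustrative assumptions:

    Section "Device"
        Identifier "Nvidia GT 335M"    # illustrative name
        Driver     "nvidia"
        Option     "RegistryDwords" "EnableBrightnessControl=1"
    EndSection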

    Read the article

  • Connecting MS SQL using freetds and unixodbc: isql - no default driver specified

    - by Dejan
    I am trying to connect to the MS SQL database using FreeTDS and unixODBC. I have read various guides on how to do it, but none of them worked for me. When I try to connect to the database using the isql tool, I get the following error:

    $ isql -v TS username password
    [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
    [ISQL]ERROR: Could not SQLConnect

    Has anybody already successfully established a connection to an MS SQL database using FreeTDS and unixODBC on Ubuntu 12.04? I would really appreciate some help. Below is the procedure I used to configure FreeTDS and unixODBC. Thanks for your help in advance!

    Procedure

    First, I installed the following packages:

    sudo apt-get install unixodbc unixodbc-dev freetds-dev tdsodbc

    and configured FreeTDS as follows:

    --- /etc/freetds/freetds.conf ---
    [TS]
    host = SERVER
    port = 1433
    tds version = 7.0
    client charset = UTF-8

    Using the tsql tool I can successfully connect to the database by executing:

    tsql -S TS -U username -P password

    As I need an ODBC connection, I configured odbcinst.ini as follows:

    --- /etc/odbcinst.ini ---
    [FreeTDS]
    Description = FreeTDS
    Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
    Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
    FileUsage = 1
    CPTimeout =
    CPResuse =
    client charset = utf-8

    and odbc.ini as follows:

    --- /etc/odbc.ini ---
    [TS]
    Description = "test"
    Driver = FreeTDS
    Servername = SERVER
    Server = SERVER
    Port = 1433
    Database = DBNAME
    Trace = No

    Trying to connect to the database using the isql tool with such a configuration results in the following error:

    $ isql -v TS username password
    [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
    [ISQL]ERROR: Could not SQLConnect

    Read the article

  • Ubuntu: The Movie

    - by CYREX
    Since Ubuntu is the most popular distribution, and has made a lot of changes in many places around the globe and in different industries, to the point where even people who do not know what Linux is know what Ubuntu is (go figure?), there might be a movie coming someday (like The Social Network for Facebook, or Revolution OS for Linux/Red Hat). I wanted to know how it all came to be, from the actual players in the show.

    UBUNTU: The Movie

    Since I have seen several of the primary characters of the movie here, this might be a good place to start on how it all came to be. Not in the traditional Wikipedia way, or via the Ubuntu help section, but in terms of what the actual developers have in mind on how it all went down, to the point of having a huge number of users and an incredible level of sophistication in the forums, help sections, installers, etc. This is just to have the know-how before the actual movie makes it out some day in the future. As a fan of Ubuntu, this is a MUST KNOW! ;) Hope I made some people happy and some others shy, hehe.

    Read the article

  • Hudson... another Continuous Integration tool

    - by Narendra Tiwari
    In my previous posts I discussed CruiseControl.NET and its legacy support for .NET development. Hudson is yet another continuous integration tool. Like CCNet, Hudson is free, and it is built in Java.

    - CCNet has its legacy support for .NET applications, whereas Hudson can easily be configured for both environments (.NET and Java).
    - One of the major differences between CCNet and Hudson is Hudson's richer GUI, which provides interactive screens for project configuration, whereas in CCNet we have to play with a few XML configuration files.

    Both tools are capable of providing the basic features of continuous integration, e.g.:
    - Source control configuration
    - Code compilation/build
    - Ad hoc plugin tools configured to run along with compilation

    Support for ad hoc tools seems broader with CCNet; for example, almost every source control plugin is available for CCNet, whereas Hudson supports a more limited set of source control servers. Basically, there is an interesting point here: a whole CI system has two major parts, one performed by the build tool and the rest by the CI tool. The build tool takes care of all the ad hoc plugin tools, so it does not matter if the CI tool lacks a plugin for a given tool; if that tool provides command-line support, it can be configured in the build tool, and the build tool is in turn configured with the CI tool. For example, if I have a build script configured in MSBuild, then CCNet can easily be switched to Hudson. We need not change anything in the build script; we just need to configure MSBuild on Hudson and pass the path of the script file, and that's it... all is the same.

    Hudson resources:
    - https://hudson.dev.java.net/
    - http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson
    - http://wiki.hudson-ci.org/display/HUDSON/Plugins
    - http://callport.blogspot.com/2009/02/hudson-for-net-projects.html

    Java support on CCNet:
    http://confluence.public.thoughtworks.org/display/CC/Getting+Started+With+CruiseControl?focusedCommentId=19988484#comment-19988484

    Please share your thoughts...

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you’re emailing a file or using online storage sites like SkyDrive, there’s a limit on the size of files you can use. Here’s how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you’d need to split them. This same technique also works for any file sharing service—even if you were sending files through email. There are two ways you can get around the limits: first, by just compressing the files if you’re close to the limit; the second and more interesting way is to split up the files into smaller chunks. Keep reading for how to do both.

    Read the article

  • How to Boot a VMware Virtual Machine from a USB Drive

    - by Usman
    Do you have an OS installed on your USB thumb drive? Booting from it in a VM is now possible; you’ll just have to use a simple trick to get it to work. Last week we showed you how to put Ubuntu on a USB drive in a separate partition, and we also discussed working with VMware Player (our favourite VM client). But have you ever tried booting from a USB drive in VMware? It doesn’t allow doing so out of the box, but we will force it to boot from a USB drive with a bit of old geekery. If you remember, we showed you how to boot from a USB drive even if your old PC doesn’t allow booting from one. That’s right: using Plop Boot Manager. All we need to do is load the Plop ISO in VMware, attach and enable the USB drive in VMware, and finally select the USB option in Plop Boot Manager to boot from the USB drive. So, visit the Plop Boot Manager download site.

    Read the article

  • Do you think that exposure to BASIC can mutilate your mind? [closed]

    - by bigown
    It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration -- Edsger W. Dijkstra

    I have deep respect for Dijkstra, but I don't agree with everything he said/wrote. I disagree especially with this quote, from the linked paper written 35 years ago, about the Dartmouth BASIC implementation. Many of my coworkers and programmer friends started with BASIC, and similar questions here have answers indicating that many programmers had their first experience of programming in BASIC. AFAIK, many good programmers started with BASIC programming.

    I'm not talking about Visual Basic or other "modern" dialects of BASIC running on machines full of resources. I'm talking about old-time BASIC running on "toy" computers, where the programmer had to worry about saving small numbers, which need not be calculated, as strings to save a measly byte, because the computer had only a few hundred of them; where one had to use a computed goto for lack of a more powerful feature; and many other things which required the programmer to think hard before doing something and forced the programmer to be creative.

    If you had experience with old-time BASIC on a machine with limited resources (bear in mind that a simple microcontroller today has many more resources than a computer in 1975), do you think that BASIC helped your mind to find better solutions and to think like an engineer, or did BASIC drag you to the dark side of programming and mutilate you mentally? Is it good to learn a programming language running on a computer full of resources, where the novice programmer can do everything wrong and the program still runs without big problems? Or is it better to learn where the programmer can't go wrong?

    What can you say about BASIC having helped you to become a better (or worse) programmer? Would you teach old BASIC running on a 2KB (virtual) machine to an up-and-coming programmer?

    Sure, exposure only to BASIC is bad. Maybe you share my opinion that modern BASIC doesn't help too much, because modern BASIC, like other programming languages, provides facilities which allow the programmer not to think deeper.

    Additional information: Why BASIC?

    Read the article

  • Ricoh Aficio 1515ps - How can I get the scanner to scan? (printing works)

    - by nutty about natty
    For similar screenshots and story, see: How to define my Samsung SCX3200 multifunction printer? and, for a possible solution: How can I get an Epson TX560WD scanner working? Thanks!

    Edit n°1: I installed xsane (via Ubuntu Software Center) and launched it, and got the following (screenshot omitted). Clicking on "Help" yields a list of suggestions (screenshot omitted). I tried 4) man sane-dll, which yields: No manual entry for sane-dll. I uninstalled "Simple Scan": didn't help. I tried 3) but aborted due to an unsettling warning ;) wasn't that brave (or desperate). Maybe it's due to 1) and I'm stuck with it? Is it really not possible that a driver for either Windoze or Mac OS would also support scanning via Ubuntu?

    Edit n°2: I resorted to Windows XP and the Network TWAIN Driver. ScanRouter, or the "lite" version (which I found elsewhere), might also work (under Linux??), but it's more than I need. I thought about WINE, but that also seems to rely on SANE... So still no luck with this device's scanning capabilities under Linux (but the XP route is good enough for my purposes)...

    Read the article

  • C# WPF Helix: scale-based mesh parenting using Transform3DGroup

    - by Rick2047
    I am using https://helixtoolkit.codeplex.com/ as a 3D framework. I want to move the black mesh relative to the green mesh, as shown in the attached image below. I want to make the green mesh a parent of the black mesh, so that a change in the scale of the green mesh also results in motion of the black mesh. It could be partial parenting, or maybe more. I need 3D rotation and 3D translation, plus translation along the green mesh's length axis, for the black mesh relative to the green mesh itself. Suppose a variable green_mesh_scale scales the green mesh along its length axis. The black mesh will use that variable in order to move along the green mesh's length axis. How do I go about it? I've done the following:

    GeometryModel3D GreenMesh, BlackMesh;
    ...
    double green_mesh_scale = e.NewValue;
    Transform3DGroup forGreen = new Transform3DGroup();
    Transform3DGroup forBlack = new Transform3DGroup();
    forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
    // ... transforms for rotation and translation
    GreenMesh.Transform = forGreen;
    forBlack = forGreen;
    forBlack.Children.Add(new TranslateTransform3D(new Vector3D(0, green_mesh_scale, 0)));
    BlackMesh.Transform = forBlack;

    The problem with this is that the scale transform is also applied to the black mesh. I think I just need to avoid the scale part. I tried keeping all the transforms except scale in another Transform3DGroup variable, but that also did not behave as expected. Can MatrixTransform3D be used here somehow? Also, please suggest if this question can be posted somewhere else on Stack Exchange.
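    A hedged sketch of one way to untangle this (untested against the asker's scene; the rotation values, offsets, and greenLength are illustrative assumptions). Note that forBlack = forGreen in the code above copies only the reference, so both meshes end up sharing one Transform3DGroup and the final TranslateTransform3D moves the green mesh too. Keeping the shared rotation/translation in its own group avoids that:

    // Shared pose (rotation + translation) that both meshes follow;
    // the values stand in for the original "..." transforms.
    double angleDegrees = 30;     // assumed rotation angle
    double greenLength = 1.0;     // assumed unscaled length of the green mesh
    Transform3DGroup sharedPose = new Transform3DGroup();
    sharedPose.Children.Add(new RotateTransform3D(
        new AxisAngleRotation3D(new Vector3D(0, 0, 1), angleDegrees)));
    sharedPose.Children.Add(new TranslateTransform3D(0, 0, 0));

    // Green mesh: scale along its length axis first, then apply the shared pose.
    Transform3DGroup forGreen = new Transform3DGroup();
    forGreen.Children.Add(new ScaleTransform3D(new Vector3D(1, green_mesh_scale, 1)));
    forGreen.Children.Add(sharedPose);
    GreenMesh.Transform = forGreen;

    // Black mesh: no scale; instead translate by the scaled length so it rides
    // along the green mesh's length axis without being stretched itself.
    Transform3DGroup forBlack = new Transform3DGroup();
    forBlack.Children.Add(new TranslateTransform3D(
        new Vector3D(0, green_mesh_scale * greenLength, 0)));
    forBlack.Children.Add(sharedPose);
    BlackMesh.Transform = forBlack;

    Because sharedPose is a single Transform3D instance referenced by both groups, later edits to its children move both meshes together, while the scale stays private to the green mesh.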

    Read the article

  • How To Make Hundreds of Complex Photo Edits in Seconds With Photoshop Actions

    - by Eric Z Goodnight
    Have a huge folder of images needing tweaks? A few hundred adjustments may seem like a big, time-consuming job—but read on to see how Photoshop can do repetitive tasks automatically, even if you don’t know how to program! Photoshop Actions are a simple way to program simple routines in Photoshop, and are a great time saver, allowing you to re-perform tasks over and over, saving you minutes or hours, depending on the job you have to work on. See how any bunch of images and even some fairly complicated photo tweaking can be done automatically to even hundreds of images at once.

    When Can I Use Photoshop Actions?

    Photoshop Actions are a way of recording the tools, menus, and keys pressed while using the program. Each time you use a tool, adjust a color, or use the brush, it can be recorded and played back over any file Photoshop can open. While it isn’t perfect and can get very confused if not set up correctly, it can automate editing hundreds of images, saving you hours and hours if you have big jobs with complex edits. The image illustrated above is a template for a polaroid-style picture frame. If you had several hundred images, it would actually be a simple matter to use Photoshop Actions to create hundreds of new images inside the frame in almost no time at all. Let’s take a look at how a simple folder of images and some image editing automation can turn lots of work into a simple and easy job.

    Creating a New Action

    The Actions panel is part of the “Essentials” panel set that Photoshop starts with by default. If you can’t see the panel button under the “History” button, you can find Actions by going to Window > Actions or pressing Alt + F9. Click the new-set button in the Actions panel (pictured in the previous illustration on the left). Choose to create a “New Set” in order to begin creating your own custom Actions. Name your action set whatever you want. Names are not relevant; you’ll simply want to make it obvious that you have created it. Click OK. Look back in the Actions panel. You’ll see your new set of actions has been added to the list. Click it to highlight it before going on. Click the new-action button to create a “New Action” in your new set. If you care to name your action, go ahead. Name it after whatever it is you’re hoping to do—change the canvas size, tint all your pictures blue, send your image to the printer in high quality, or run multiple filters on images. The name is for your own usage, so do what suits you best. Note that you can simplify your process by creating shortcut keys for your actions. If you plan to do hundreds of edits with your actions, this might be a good idea. If you plan to record an action to use every time you use Photoshop, this might even be an invaluable step. When you create a new Action, Photoshop automatically begins recording everything you do. It does not record the time in between steps, but rather only the data from each step. So take your time when recording and make sure you create your actions the way you want them. The square button stops recording, and the circle button starts recording again. With these basics ready, we can take a look at a sample Action.

    Recording a Sample Action

    Photoshop will remember everything you input into it when it is recording, even specific photographs you open. So begin recording your action when your first photo is already open. Once your first image is open, click the record button. If you’re already recording, continue on. Using the File > Place command to insert the polaroid image can be easier for Actions to deal with.
Photoshop can record with multiple open files, but it often gets confused when you try it. Keep your recordings as simple as possible to ensure your success. When the image is placed in, simply press Enter to render it. Select your background layer in your Layers panel. Your recording should be following along with no trouble. Double-click this layer. Double-clicking your background layer will create a new layer from it. Allow it to be renamed “Layer 0” and press OK. Move the “polaroid” layer to the bottom by selecting it and dragging it down below “Layer 0” in the Layers panel. Right-click “Layer 0” and select “Create Clipping Mask.” The JPG image is cropped to the layer below it. Coincidentally, all actions described here are being recorded perfectly, and are reproducible. Cursor actions, like the eraser, brush, or bucket fill, don’t record well, because the computer uses your mouse movements and coordinates, which may need to change from photo to photo.

Use the blending mode menu in the Layers panel to set your photograph layer to the “Screen” blending mode. This will make the image disappear when it runs over the white parts of the polaroid image. With your image layer (Layer 0) still selected, navigate to Edit > Transform > Scale. You can use the mouse to resize your Layer 0, but Actions work better with absolute numbers. Visit the Width and Height adjustments in the top options panel. Click the chain icon to link them together, and adjust them numerically. Depending on your needs, you may need to use more or less than 30%. Your image will resize to your specifications. Press Enter to render, or click the check box in the top right of your application. Ctrl + Click on your bottom layer, or “polaroid” in this case. This creates a selection of the bottom layer. Navigate to Image > Crop in order to crop down to your bottom layer selection. Your image is now resized to your bottommost layer, and Photoshop is still recording to that effect. For additional effect, we can navigate to Image > Image Rotation > Arbitrary to rotate our image by a small tilt. Choosing 3 degrees clockwise, we click OK to render our choice. Our image is rotated, and this step is recorded.

Photoshop will even record when you save your files. With your recording still going, find File > Save As. You can easily tell Photoshop to save in a new folder, other than the one you have been working in, so that your files aren’t overwritten. Navigate to any folder you wish, but do not change the filename. If you change the filename, Photoshop will record that name, and save all your images under whatever you type. However, you can change your filetype without recording an absolute filename. Use the pulldown tab and select a different filetype—in this instance, PNG. Simply click “Save” to create a new PNG based on your actions. Photoshop will record the destination and the change in filetype. If you didn’t edit the name of your file, it will always use the variable filename of any image you open. (This is very important if you want to edit hundreds of images at once!) Click File > Close or the red “X” in the corner to close your file. Photoshop can record that as well. Since we have already saved our image as a JPG, click “NO” to not overwrite your original image. Photoshop will also record your choice of “NO” for subsequent images. In your Actions panel, click the stop button to complete your action. You can always click the record button to add more steps later, if you want. This is how your new action looks with its steps expanded.
Curious how to put it into effect? Read on to see how simple it is to use that recording you just made.

Editing Lots of Images with Your New Action

Open a large number of images—as many as you care to work with. Your action should work immediately with every image on screen, although you may have to test and re-record, depending on how you did. Actions don’t require any programming knowledge, but often can get confused or work in a counter-intuitive way. Record your action until it is perfect. If it works once without errors, it’s likely to work again and again! Find the “Play” button in your Actions panel. With your custom action selected, click “Play” and your routine will edit, save, and close each file for you. Keep bashing “Play” for each open file, and it will keep saving and creating new files until you run out of work you need to do. And in mere moments, a complicated stack of work is done. Photoshop actions can be very complicated, far beyond what is illustrated here, and can even be combined with scripts and other actions, allowing automated creation of potentially very complex files, or applying filters to an entire portfolio of digital photos.

Have questions or comments concerning Graphics, Photos, Filetypes, or Photoshop? Send your questions to [email protected], and they may be featured in a future How-To Geek Graphics article. Image Credits: All images copyright Stephanie Pragnell and author Eric Z Goodnight, protected under Creative Commons.

    Read the article

  • Upcoming Webcast: Use Visual Decision Making To Boost the Pace of Product Innovation – October 24, 2013

    - by Gerald Fauteux
    See More, Do More: Use Visual Decision Making To Boost the Pace of Product Innovation

    Join a free webcast hosted by Oracle, featuring QUALCOMM. Click here to register for this webcast.

    Keeping innovation ahead of shrinking product lifecycles continues to be a challenge in today’s fast-paced business environment, but new visualization techniques in the product design and development process are helping businesses widen the gap further. Innovative visualization methods, including Augmented Business Visualization, can be powerful differentiators for business leaders, especially when it comes to accelerating product cycles. Don’t miss this opportunity to discover how visualization tied to PLM can help empower visual decision making and enhance productivity across your organization. See more and do more with the power of Oracle.

    Join solution experts from Oracle and special guest Ravi Sankaran, Sr. Staff Systems Analyst, QUALCOMM, to discuss how visual decision making can help efficiently ramp innovation efforts throughout the product lifecycle:
    - Advance collaboration with universal access across all document types, with robust security measures in place
    - Synthesize product information quickly (cost, quality, compliance, etc.) in a highly visual form, from multiple sources, in a single visual and actionable environment
    - Increase productivity by rendering documents in the appropriate context of specific business processes
    - Drive modern business transformation with new collaboration methods such as Augmented Business Visualization

    Date: Thursday, October 24, 2013
    Time: 10:00 a.m. PDT / 1:00 p.m. EDT
    Click here to register for this FREE event.

    Read the article

  • Master-Details with RadGridView for Silverlight 4, WCF RIA Services RC2 and Entity Framework 4.0

    I have prepared a sample project with the Silverlight 4 version of RadGridView released yesterday. The sample project was created with Visual Studio 2010, WCF RIA Services RC2 for Visual Studio 2010, and ADO.NET Entity Framework (.NET 4). I have decided to use the SalesOrderHeader and SalesOrderDetails tables from the Adventure Works database, because they provide the perfect one-to-many relationship. I will not go over the steps for creating the ADO.NET Entity Data Model and the Domain Service Class. In case you are not familiar with them, you should start with Brad Abrams' series of blog posts and read this blog after that. To enable the master-details relationship we need to modify two things. First of all, we need to include the automatic retrieval of the child entities in the domain service class. We do this by using the Include method:

    public IQueryable<SalesOrderHeader> GetSalesOrderHeaders()...
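    For readers following along, the usual shape of that query method in a LinqToEntitiesDomainService (a sketch based on the standard WCF RIA Services + Entity Framework pattern; the context and navigation property names are assumptions drawn from the Adventure Works model described above) is:

    // Eager-load each order's details so the child collection travels
    // to the Silverlight client together with its parent entity.
    public IQueryable<SalesOrderHeader> GetSalesOrderHeaders()
    {
        return this.ObjectContext.SalesOrderHeaders
                   .Include("SalesOrderDetails");
    }

    For the details to actually be serialized to the client, the association on the SalesOrderHeader metadata class typically also needs the [Include] attribute from System.ServiceModel.DomainServices.Server.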

    Read the article

  • How does the PPA fit into the scenario of publishing an application to the Ubuntu Software Center?

    - by Mridang Agarwalla
    I've been going through docs for the past couple of hours, but I haven't understood what a PPA is. I have a cross-platform Java application that I'd like to publish to the Ubuntu Software Center. My application is open source and I'm using GitHub. Apparently, publishing applications to the store isn't as simple as uploading a deb package - am I right? I need to create an account on Launchpad and put all my code there. I don't intend to move from Git to Bzr merely for the sake of publishing to the app store, but luckily, one is able to set up source-code mirroring from GitHub to Launchpad. Since my application is still very premature, it'll have updates fairly often. When I build my application on my machine, do I simply go to my Ubuntu App Developer page and upload the new DEB package, or do they build my application from source? What exactly is the PPA for? I don't think I'll need too many of the Launchpad features, so I'd like to stick to GitHub if possible. (Publishing for Ubuntu really isn't trivial. I can see why there are so many developers out there who haven't published their applications to the Ubuntu Software Center. Publishing Android applications has been the easiest so far.)

    Read the article

  • Ajax Control Toolkit May 2012 Release

    - by Stephen.Walther
    I’m happy to announce the May 2012 release of the Ajax Control Toolkit. This newest release of the Ajax Control Toolkit includes a new file upload control which displays file upload progress. We’ve also added several significant enhancements to the existing HtmlEditorExtender control, such as support for uploading images and Source View. You can download and start using the newest version of the Ajax Control Toolkit by entering the following command in the Library Package Manager console in Visual Studio: Install-Package AjaxControlToolkit. Alternatively, you can download the latest version of the Ajax Control Toolkit from CodePlex: http://AjaxControlToolkit.CodePlex.com

    The New Ajax File Upload Control

    The most requested new feature for the Ajax Control Toolkit (according to the CodePlex Issue Tracker) has been support for file upload with progress. We worked hard over the last few months to create an entirely new file upload control which displays upload progress. Here is a sample which illustrates how you can use the new AjaxFileUpload control:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="01_FileUpload.aspx.cs" Inherits="WebApplication1._01_FileUpload" %>
    <html>
    <head runat="server">
        <title>Simple File Upload</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <ajaxToolkit:ToolkitScriptManager runat="server" />
            <ajaxToolkit:AjaxFileUpload
                id="ajaxUpload1"
                OnUploadComplete="ajaxUpload1_OnUploadComplete"
                runat="server" />
        </div>
        </form>
    </body>
    </html>

    The page above includes a ToolkitScriptManager control. This control is required to use any of the controls in the Ajax Control Toolkit because this control is responsible for loading all of the scripts required by a control. The page also contains an AjaxFileUpload control. The UploadComplete event is handled in the code-behind for the page:

    namespace WebApplication1
    {
        public partial class _01_FileUpload : System.Web.UI.Page
        {
            protected void ajaxUpload1_OnUploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
            {
                // Generate file path
                string filePath = "~/Images/" + e.FileName;

                // Save upload file to the file system
                ajaxUpload1.SaveAs(MapPath(filePath));
            }
        }
    }

    The UploadComplete handler saves each uploaded file by calling the AjaxFileUpload control’s SaveAs() method with a full file path. Here’s a video which illustrates the process of uploading a file:

    Warning: in order to write to the Images folder on a production IIS server, you need Write permissions on the Images folder. You need to provide permissions for the IIS Application Pool account to write to the Images folder. To learn more, see: http://learn.iis.net/page.aspx/624/application-pool-identities/

    Showing File Upload Progress

    The new AjaxFileUpload control takes advantage of HTML5 upload progress events (described in the XMLHttpRequest Level 2 standard). This standard is supported by Firefox 8+, Chrome 16+, Safari 5+, and Internet Explorer 10+. In other words, the standard is supported by the most recent versions of all browsers except for Internet Explorer, which will support the standard with the release of Internet Explorer 10. The AjaxFileUpload control works with all browsers, even browsers which do not support the new XMLHttpRequest Level 2 standard. If you use the AjaxFileUpload control with a downlevel browser - such as Internet Explorer 9 - then you get a simple throbber image during a file upload instead of a progress indicator.
Here’s how you specify a throbber image when declaring the AjaxFileUpload control:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="02_FileUpload.aspx.cs" Inherits="WebApplication1._02_FileUpload" %>
<html>
<head id="Head1" runat="server">
    <title>File Upload with Throbber</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
        <ajaxToolkit:AjaxFileUpload
            id="ajaxUpload1"
            OnUploadComplete="ajaxUpload1_OnUploadComplete"
            ThrobberID="MyThrobber"
            runat="server" />
        <asp:Image id="MyThrobber" ImageUrl="ajax-loader.gif" Style="display:None" runat="server" />
    </div>
    </form>
</body>
</html>

Notice that the page above includes an image with the Id MyThrobber. This image is displayed while files are being uploaded. I use the website http://AjaxLoad.info to generate animated busy wait images.

Drag-And-Drop File Upload

If you are using an uplevel browser then you can drag-and-drop the files which you want to upload onto the AjaxFileUpload control. The following video illustrates how drag-and-drop works: Remember that drag-and-drop will not work on Internet Explorer 9 or older.

Accepting Multiple Files

By default, the AjaxFileUpload control enables you to upload multiple files at a time. When you open the file dialog, use the CTRL or SHIFT key to select multiple files. If you want to restrict the number of files that can be uploaded then use the MaximumNumberOfFiles property like this:

<ajaxToolkit:AjaxFileUpload
    id="ajaxUpload1"
    OnUploadComplete="ajaxUpload1_OnUploadComplete"
    ThrobberID="throbber"
    MaximumNumberOfFiles="1"
    runat="server" />

In the code above, the maximum number of files which can be uploaded is restricted to a single file.

Restricting Uploaded File Types

You might want to allow only certain types of files to be uploaded. For example, you might want to accept only image uploads. In that case, you can use the AllowedFileTypes property to provide a list of allowed file types like this:

<ajaxToolkit:AjaxFileUpload
    id="ajaxUpload1"
    OnUploadComplete="ajaxUpload1_OnUploadComplete"
    ThrobberID="throbber"
    AllowedFileTypes="jpg,jpeg,gif,png"
    runat="server" />

The code above prevents any files except jpeg, gif, and png files from being uploaded.

Enhancements to the HTMLEditorExtender

Over the past months, we spent a considerable amount of time making bug fixes and feature enhancements to the existing HtmlEditorExtender control. I want to focus on two of the most significant enhancements that we made to the control: support for Source View and support for uploading images.

Adding Source View Support to the HtmlEditorExtender

When you click the Source View tab, the HtmlEditorExtender changes modes and displays the HTML source of the contents contained in the TextBox being extended. You can use Source View to make fine-grained changes to HTML before submitting the HTML to the server. For reasons of backwards compatibility, the Source View tab is disabled by default.
To enable Source View, you need to declare your HtmlEditorExtender with the DisplaySourceTab property like this:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="05_SourceView.aspx.cs" Inherits="WebApplication1._05_SourceView" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head id="Head1" runat="server">
    <title>HtmlEditorExtender with Source View</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
        <asp:TextBox id="txtComments" TextMode="MultiLine" Columns="60" Rows="10" Runat="server" />
        <ajaxToolkit:HtmlEditorExtender
            id="HEE1"
            TargetControlID="txtComments"
            DisplaySourceTab="true"
            runat="server" />
    </div>
    </form>
</body>
</html>

The page above includes a ToolkitScriptManager, TextBox, and HtmlEditorExtender control. The HtmlEditorExtender extends the TextBox so that it supports rich text editing. Notice that the HtmlEditorExtender includes a DisplaySourceTab property. This property causes a button to appear at the bottom of the HtmlEditorExtender which enables you to switch to Source View.

Note: when using the HtmlEditorExtender, we recommend that you set the DOCTYPE for the document. Otherwise, you can encounter weird formatting issues.

Accepting Image Uploads

We also enhanced the HtmlEditorExtender to support image uploads (another very highly requested feature at CodePlex). The following video illustrates the experience of adding an image to the editor: Once again, for backwards compatibility reasons, support for image uploads is disabled by default. Here’s how you can declare the HtmlEditorExtender so that it supports image uploads:

<ajaxToolkit:HtmlEditorExtender
    id="MyHtmlEditorExtender"
    TargetControlID="txtComments"
    OnImageUploadComplete="MyHtmlEditorExtender_ImageUploadComplete"
    DisplaySourceTab="true"
    runat="server">
    <Toolbar>
        <ajaxToolkit:Bold />
        <ajaxToolkit:Italic />
        <ajaxToolkit:Underline />
        <ajaxToolkit:InsertImage />
    </Toolbar>
</ajaxToolkit:HtmlEditorExtender>

There are two things that you should notice about the code above. First, notice that an InsertImage toolbar button is added to the HtmlEditorExtender toolbar. This HtmlEditorExtender will render toolbar buttons for bold, italic, underline, and insert image. Second, notice that the HtmlEditorExtender includes an event handler for the ImageUploadComplete event. The code for this event handler is below:

using System.Web.UI;
using AjaxControlToolkit;

namespace WebApplication1
{
    public partial class _06_ImageUpload : System.Web.UI.Page
    {
        protected void MyHtmlEditorExtender_ImageUploadComplete(object sender, AjaxFileUploadEventArgs e)
        {
            // Generate file path
            string filePath = "~/Images/" + e.FileName;

            // Save uploaded file to the file system
            var ajaxFileUpload = (AjaxFileUpload)sender;
            ajaxFileUpload.SaveAs(MapPath(filePath));

            // Update client with saved image path
            e.PostedUrl = Page.ResolveUrl(filePath);
        }
    }
}

Within the ImageUploadComplete event handler, you need to do two things: 1) save the uploaded image (for example, to the file system, a database, or Azure storage) and 2) provide the URL to the saved image so the image can be displayed within the HtmlEditorExtender. In the code above, the uploaded image is saved to the ~/Images folder. The path of the saved image is returned to the client by setting the AjaxFileUploadEventArgs PostedUrl property. Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload.
Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload control. You can get a direct reference to the AjaxFileUpload control used by an HtmlEditorExtender by using the following code:

void Page_Load()
{
    var ajaxFileUpload = MyHtmlEditorExtender.AjaxFileUpload;
    ajaxFileUpload.AllowedFileTypes = "jpg,jpeg";
}

The code above illustrates how you can restrict the types of images that can be uploaded to the HtmlEditorExtender. This code prevents anything but jpeg images from being uploaded.

Summary

This was the most difficult release of the Ajax Control Toolkit to date. We iterated through several designs for the AjaxFileUpload control; with each iteration, the goal was to make the AjaxFileUpload control easier for developers to use. My hope is that we were able to create a control which Web Forms developers will find very intuitive. I want to thank the developers on the Superexpert.com team for their hard work on this release.

    Read the article

  • Regex syntax question - trying to understand

    - by Asaf Chertkoff
    I don't know if this question belongs here or not, but it is worth a shot. I'm a self-taught PHP programmer and I'm only now starting to grasp the regex stuff. I'm pretty aware of its capabilities when it is done right, but this is something I need to dive into, so maybe someone can help me and save me some hours of experimenting. I have this string:

    here is the <a href="http://www.google.com" class="ttt" title="here"><img src="http://www.somewhere.com/1.png" alt="some" /></a> and there is <a href="#not">not</a> a chance...

    Now, I need to preg_match this string and search for the a href tag that has an image in it, and replace it with the same tag with a small difference: after the title attribute inside the tag, I want to add a rel="here" attribute. Of course, it should ignore links (a hrefs) that don't have an img tag inside. Help will be appreciated, thanks.

    Read the article

  • Antec Fusion Black LCD won't turn off on system shutdown

    - by Niklas
    Hi! I've been struggling for a while with getting the iMON LCD/IR receiver on my Antec Fusion Black case to shut down together with the system (XBMC Live 10, which is Ubuntu-based), but it won't. When the machine is turned off, the LCD still lights up the whole room. Many have proposed the "solution" of putting the machine into hibernation instead; however, that won't work for me, since I'm unable to suspend my system. It is the LCD/IR module that prevents me from suspending, and I haven't found a way to properly unload it on suspend (it's way above my Linux knowledge). I need help getting the display to turn off its backlight when the system is turned off. Can anyone please help me? If anyone also knows how to get the eject function working on my Antec Veris RM200 remote, I would be very grateful; I was told it could be fixed with irexec, but I do not know how, since I haven't been able to find a good tutorial on the subject. Thank you for helping me!
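    For the irexec part of the question, the usual pattern is to map the remote's eject button to a shell command in ~/.lircrc. The sketch below is illustrative only: the button name (KEY_EJECTCD) and drive device (/dev/sr0) are assumptions, so verify the real button name by pressing it while running the irw tool, and adjust the device to match your system.

    # ~/.lircrc entry read by irexec (start it with: irexec -d)
    begin
        prog = irexec
        button = KEY_EJECTCD
        config = eject /dev/sr0
    end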

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C) and it's a grueling weekend of nearly 30 hours of coding. When you finish, you get to present what you have completed to the charity and all of the other groups. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website, intranet, or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs: software developers volunteering their time to help charities better serve their community through the latest technology! The actual event brought together multiple charities and about 50 developers, designers, business analysts, and others, each group working with a different charity to come up with a solution that it could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. (A screen shot in the original post shows the result, with private data marked out.) It was an awesome and humbling experience to be able to give back to a charity, and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time for a charity?

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today, though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:

    * Zoiner Tejada (JavaScript and other material): http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    * WebDAV (where I learned about "Accept"): http://www-jo.se/f.pfleger/cors-and-iis?
    * IT Hit (tells about NOT using '*'): http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    * Carlos Figueira (sample back-end code, newer): http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d
      (older version): http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background

    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it.

    So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. (Screen captures in the original post, IE on the left and Chrome on the right, show IE displaying the data, a tabular view with one row for each day of a week, while Chrome displays nothing and its developer console shows an XmlHttpRequest (XHR) error.)

    Why does this happen?

    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly:

    1) Page loads
    2) JavaScript request processed by browser
    3) Browser prepares to submit request
    4) [Chrome] Browser submits pre-flight request
    5) Server responds with HTTP 200
    6) Browser submits request
    7) Server responds with data
    8) Page shows data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data; therefore, more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.
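    To make step 4 concrete, here is roughly what the pre-flight exchange looks like on the wire; the host name, port, and header values below are illustrative rather than taken from our project:

    OPTIONS /api/foo HTTP/1.1
    Host: api.example.com
    Origin: http://localhost:51234
    Access-Control-Request-Method: POST
    Access-Control-Request-Headers: content-type

    HTTP/1.1 200 OK
    Access-Control-Allow-Origin: http://localhost:51234
    Access-Control-Allow-Methods: GET, POST, OPTIONS
    Access-Control-Allow-Headers: Content-Type, Accept

    Only if this OPTIONS response satisfies the browser do the real request and response (steps 6 and 7) go out.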
    How to fix it

    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information (see the references above). There are basically three (3) tasks needed to make this work.

    Assumptions: you are using Visual Studio, Web API, and JavaScript, you need Cross-Origin Resource Sharing, and you are testing with several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from the Zoiner Tejada reference above). There are two calls: one for POST and one for GET.

    The POST CORS JavaScript function (credit to Zoiner Tejada):

    corsAjax.post = function (url, data, callback) {
        console.log(data);
        $.support.cors = true;
        var jqxhr = $.post(url, data, callback, "json")
            .error(function (jqXhHR, status, errorThrown) {
                if ($.browser.msie && window.XDomainRequest) {
                    var xdr = new XDomainRequest();
                    xdr.open("post", url);
                    xdr.onload = function () {
                        if (callback) {
                            callback(JSON.parse(this.responseText), 'success');
                        }
                    };
                    xdr.send(data);
                } else {
                    console.log(">" + jqXhHR.status);
                    alert("corsAjax.post error: " + status + ", " + errorThrown);
                }
            });
    };

    The GET CORS JavaScript function (credit to Zoiner Tejada):

    corsAjax.get = function (url, callback) {
        $.support.cors = true;
        var jqxhr = $.get(url, null, callback, "json")
            .error(function (jqXhHR, status, errorThrown) {
                if ($.browser.msie && window.XDomainRequest) {
                    var xdr = new XDomainRequest();
                    xdr.open("get", url);
                    xdr.onload = function () {
                        if (callback) {
                            callback(JSON.parse(this.responseText), 'success');
                        }
                    };
                    xdr.send();
                } else {
                    alert("CORS is not supported in this browser or from this origin.");
                }
            });
    };

    (Note that the original post captioned these two functions the other way around; the first uses $.post and is the POST helper, the second uses $.get and is the GET helper.)

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

    corsAjax.get(url, function (data) {
        if (data !== null && data.length !== undefined) {
            // do something with data
        }
    });

    And here is a POST example:

    corsAjax.post(url, item);

    Simple… except you're not done yet.

    2. Change Web API Controllers to Allow CORS

    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header, Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated.

    NOTE: You'll see a lot of references to using '*' in the header value. For security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • SQL Azure Pricing

    - by kaleidoscope
    Microsoft’s pricing for SQL Server in the cloud, SQL Azure, has been announced:

    $9.99 per month for 0–1GB
    $99.99 per month for up to 10GB

    There's currently a 10GB maximum size cap for SQL Azure. For larger data storage needs, you'll need to break the databases into smaller sizes.

    Scaling SQL Azure Applications

    If you think you're going to need 100GB in the near term, it probably makes sense to break your application up into multiple separate databases from the get-go (10 databases at $9.99 each come to $99.90, roughly the price of one 10GB database anyway) and just make really sure none of the individual databases exceeds 10GB.

    Beep Beep, Back That Database Up

    The bandwidth costs for SQL Azure are $0.15 per GB of outbound bandwidth. Assuming that you don't compress the data before you pull it out of the cloud, daily backups of a 1GB database will add another $4.50 per month (1GB x 30 days x $0.15/GB), and a 10GB database will add another $45/month (10GB x 30 days x $0.15/GB). Daily backups will cost about half of what your monthly service charges cost.

    It's not completely clear from the press release, but if Microsoft follows Amazon's pricing model, bandwidth between the Microsoft cloud services will not incur a cost. That would mean it might make sense to spin up a Windows Azure computing application for $0.12 per hour, use that application to compress your SQL Azure database, and then send the compressed data off to Azure storage for backup. That would eliminate the data in/out costs and minimize the Azure storage costs ($0.15/GB). Database administrators would back up their SQL Azure data to Azure Storage, keep a history of backups there, and restore them to SQL Azure faster when needed.

    Of course, there's no native backup support in SQL Azure, and it's not clear whether Windows Azure will include tools like SQL Server Integration Services.

    More details can be found at http://www.brentozar.com/archive/2009/07/sql-azure-pricing-10-for-1gb-100-for-10gb/

    Anish, S

    Read the article

  • How much Ruby should I learn before moving to Rails?

    - by Kevin
    Just a quick question... I can never get a definitive answer when googling this, either. Some people say you can learn Rails without knowing any Ruby, but at some point you'll run into a brick wall, wish you knew Ruby, and have to go back to learn it... and some say to learn the "basics" of Ruby before learning Rails, and it will make your life that much easier. My current knowledge is low. I'm not a beginner, but I'm not a pro, either. I went through the Learn Python The Hard Way online book in about a month, but I stopped once I got to the OOP side of Python (I know booleans, if/elif/else statements, for loops, while loops, and functions). I agree with learning the "basics" of Ruby before learning Rails, but what exactly are the "basics" of Ruby? Would I need to learn the whole OOP side of Ruby before I went on to Rails? Or would I just need to learn the Ruby syntax up to where I learned Python (booleans, if/elif/else statements, for loops, while loops, functions) before I went on to Rails? Thanks!

    Read the article

  • Partner Spotlight

    - by rituchhibber
    FADATA

    Fadata officially became a WebCenter Content Specialized partner in the Adriatic region upon the successful completion of the corresponding Oracle specialization tests. ''Being recognized by Oracle and customers will greatly help our company in maintaining a high level of implementation services related to Oracle WebCenter Content, as one of the strategic products we are focused on. This certification, which our team is very proud of, will certainly help our company FADATA gain additional advantage, competitiveness, and integrity in implementing Oracle WebCenter Content solutions, both on current and future projects in the region,'' according to Velimir Corovic and Marjan Nikolic from Fadata (www.fadata.bg).

    FISHBOWL SOLUTIONS

    Google Search Appliance Connector for Oracle WebCenter Content

    The Google Search Appliance (GSA) provides fast search for your intranet or website. Fishbowl Solutions provides a connector for the Google Search Appliance that integrates it with the Oracle WebCenter Content Server while retaining the security benefits of Oracle WebCenter Content. For more information and a real customer example, see the Fairview Health Services case study or the webinar recording.

    SIGNUM

    Signum TTE, from Turkey and a Gold member of Oracle® PartnerNetwork (OPN), recently announced it has achieved OPN Specialized status for Oracle WebCenter Portal. Signum TTE, which began operations in the IT sector in 2005, is an innovative software solution house focused on two main areas: services and solutions for Oracle Middleware products and technologies, and its own product, "WinDesk Service Management".

    Read the article
