Search Results

Search found 11836 results on 474 pages for 'cloud dev'.

Page 286/474

  • Ok it has been pointed out to me

    - by Ratman21
    That it seems my blog is more of a poor-me, pity-me, or I-deserve-a-job blog. Hmmm, I won't say I have not whined here, as I have used this blog to vent my frustration about the whole out-of-work thing (lack of money, self-worth, family issues and the never-ending bills coming my way), but it was also me trying to reach others in the same boat, as well as advertising: hey, employers, I am out here. It was also said that I don't have anything listed here on me, like a cover letter or resume. Well, there is, but it was so many months and posts ago, and what I had posted is not current. So here is my most current cover letter and resume.

    Scott L Newman
    45219 Dutton Way, Callahan, FL 32011

    To Whom It May Concern:
    I am really interested in the IT vacancy that you have listed for your company. Maybe I don't have all the qualifications you want (hold on, don't hit delete yet) yet! But maybe I do, as I have over 20 years of experience in IT right now. Read the rest of my cover letter and my resume. You will see what my IT skills are, and they will show that I can do this work. I can bring to your company, along with my can-do attitude, a broad range of skills, including:

    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN - UK to Austria
    - 20 years experience in MIS/DP - yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical help desk support - panicking users, no problem
    - 18 years experience with PC/server-based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and data network fundamentals (yes, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems - again, panicking users, no problem
    - Working experience with remote access (VPN/SecurID) - I didn't just study them, I worked on and with them
    - Skilled in gathering information for and creating documentation for operation procedures (I don't just wait for them to give it to me, I go out and get it; waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    - And much more experience in IT (mortgage, stocks and financial information systems experience, and I have worked IT in a hospital)
    - Can multitask, and have the ability to adapt to change and learn quickly (I was once put in charge of a system that I had not worked with for over two years; talk about having to relearn and adapt to changes, but I did it)

    I would welcome the opportunity to further discuss this position with you. If you have questions or would like to schedule an interview, please contact me by phone at 904-879-4880, on my cell at 352-356-0945, by e-mail at [email protected], or leave a message on my web site (http://beingscottnewman.webs.com/). I have enclosed/attached my resume for your review, and I look forward to hearing from you. Thank you for taking a moment to consider my cover letter and resume. I appreciate how busy you are.
    Sincerely, Scott L. Newman

    Scott L. Newman
    45219 Dutton Way, Callahan, FL 32011
    H (904) 879-4880 / C (352) 356-0945
    [email protected] / Web: http://beingscottnewman.webs.com/

    OBJECTIVE
    To obtain a Network Operation or Helpdesk position.

    PROFILE
    Information Technology professional with 20+ years of experience. Volunteer website creator and back-up sound technician at True Faith Christian Fellowship. CompTIA A+, Network+ and Security+ certified.

    TECHNICAL AND PROFESSIONAL SKILLS
    Technical Support, Frame Relay, Microsoft Office Suite, Inventory Management, ISDN, Windows NT/98/XP, Client/Vendor Relations, CICS, Cisco Routers/Switches, Networking/Administration, RPG, Helpdesk, Website Design/Dev./Management, Assembler, Visio, Programming, COBOL IV

    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida - CompTIA A+, Security+ and Network+ certified; currently working on CCNA certification
    - Mott Community College, Flint, Michigan - Associate's Degree, Data Processing and General Education
    - Currently studying Japanese

    PROFESSIONAL
    True Faith Christian Fellowship Church - Callahan, FL, October 2009 - Present
    Web Site Tech
    - Web site creator/tech, back-up song leader and back-up sound technician. The church web site is http://ambassadorsforjesuschrist.webs.com/

    U.S. Census (temp employee), Feb. 23 to March 8, 2010
    - Enumerator for Nassau County

    Thomas Creek Baptist Church - Callahan, FL, June 2008 - September 2009
    Church Sound and Video Technician
    - Sound and video technician

    Fidelity National Information Services - Jacksonville, FL - February 01, 2005 to October 28, 2008
    Client Server Dev/Analyst I
    - Monitored multiple debit card sites, check authorization customers and the card auth system (AuthNet) for problems with the sites, connections, servers (on our LAN) and/or applications
    - Night (NOC) network operator for a large Wide Area Network (WAN)
    - Monitored multiple check authorization customers for problems with circuits, routers and applications
    - Resolved circuit and/or router issues, or assisted the circuit carrier in resolving the issue
    - Resolved application problems, or assisted application support in resolution
    - Liaison between customer and application support
    - Maintained and updated the NetOps operation procedures guide
    - Kept the listing of equipment on the raised floor updated
    - Involved in the training of all night check and card server operators
    - FNIS acquired Certegy in 2005; I was one of three kept on

    Certegy - St. Pete, FL - August 31, 2003 to February 1, 2005
    Senior NetOps Operator (FNIS acquired Certegy in 2005; all of the above jobs/skills were the same as listed under FNIS)
    - Converted documentation to Adobe format
    - Sole trainer of day/night shift System Management Center (SMC) operators
    - Equifax spun off the card/check department as Certegy; Certegy terminated its contract with EDS; I was one of six in the whole IT department kept on

    EDS (Certegy account) - St. Pete, FL - July 1, 1999 to August 31, 2003
    Senior NetOps Operator
    - Equifax outsourced the NetOps department to EDS in 1999
    - Same job skills as listed above for FNIS

    Equifax - St. Pete & Tampa, FL - January 1, 1991 to July 1, 1999
    NetOps/Tandem Operator
    - All of the above for FNIS, except for circuit and router issues
    - Operated, monitored and troubleshot Tandem mainframes and servers on the LAN
    - Supported the operation of the print, tape and microfiche rooms
    - Equifax acquired TelaCredit in 1991

    TelaCredit - Tampa, FL - June 28, 1989 to January 1, 1991
    Tandem Operator
    - Operated and monitored Tandem NonStop systems for card and check auths
    - Operated multiple high-speed laser printers and microfiche printers
    - Mounted, filed and maintained 18 reel-to-reel mainframe tape drives, cartridge tape drives and the tape library

    Read the article

  • Agile bug fixing - what's the preferred process for testing?

    - by Andrew Stephens
    When a bug is fixed, the dev sets its status to "resolved" and the bug is reassigned to the person who created it. In our case this is usually the product owner - we don't have dedicated testers. But what's a good process for controlling how and when the PO tests the software? Should he be given the latest build after each bug is resolved and checked in? Or every morning? Or should he only receive a build at (or close to) the end of the iteration, to include all of that iteration's new functionality and bug fixes? We are using TFS, by the way.

    Read the article

  • bash script to login to webpage

    - by Nathan Cazell
    I am trying to log in to this page but I cannot for the life of me get it to work. I have to log in to this site when I connect to my school's wifi in order to start a session. So far I've tried to use bash and curl to achieve this, but have only achieved giving myself a headache. Will curl work, or am I on the wrong track? Any help is greatly appreciated! Thanks, N

    Here's what I tried:

      curl --cookie-jar cjar --output /dev/null http://campus.fsu.edu/webapps/login/
      curl --cookie cjar --cookie-jar cjar \
           --data 'username=foo' \
           --data 'password=bar' \
           --data 'service=http://campus.fsu.edu/webapps/login/' \
           --data 'loginurl=http://campus.fsu.edu/webapps/login/bb_bb60/logincas.jsp' \
           --location \
           --output ~/loginresult.html \
           http://campus.fsu.edu/webapps/login/
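    One variation worth trying (a sketch only - it assumes the form really does use the username, password, service and loginurl fields from the attempt above, and the success marker is a guess) is to let curl URL-encode the credentials and then check the result page:

      #!/bin/bash
      # Hypothetical login helper; field names are taken from the attempt above.
      USER='foo'
      PASS='bar'
      BASE='http://campus.fsu.edu/webapps/login/'

      # First request just collects the session cookies.
      curl --silent --cookie-jar cjar --output /dev/null "$BASE"

      # Second request posts the form; --data-urlencode protects special
      # characters in the password that plain --data would mangle.
      curl --silent --cookie cjar --cookie-jar cjar \
           --data-urlencode "username=$USER" \
           --data-urlencode "password=$PASS" \
           --data "service=$BASE" \
           --data "loginurl=${BASE}bb_bb60/logincas.jsp" \
           --location --output loginresult.html "$BASE"

      # Crude success check: adjust the pattern to whatever the portal
      # actually shows after a successful login.
      grep -qi "logout" loginresult.html && echo "logged in" || echo "login failed"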

    Read the article

  • Ubuntu 12.04 booting into busybox after update

    - by Victor Alejandro Martinez
    Every time the system updates the kernel from 3.5.0-24 to 3.5.0-34, I get dropped into a busybox prompt at boot, but I can boot just fine using the previous kernel. I've tried all I know: I did a fsck.ext3 -f /dev/sdb2 using the alternate install CD, I've used boot-repair but to no avail, and I've checked for bad blocks but there are none. Should I purge the new kernel and use the old one instead? This is the output from boot-repair the first time, with no purge: http://paste.ubuntu.com/5809230/
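    If purging turns out to be the answer, this is the sequence that should do it (a sketch; the package name assumes the version above - worth confirming with dpkg first):

      # From the working 3.5.0-24 kernel, confirm the exact package name:
      dpkg -l | grep linux-image

      # Remove the kernel that drops to busybox, then rebuild the
      # initramfs and the GRUB menu:
      sudo apt-get purge linux-image-3.5.0-34-generic
      sudo update-initramfs -u
      sudo update-grub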

    Read the article

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades:

      libgcrypt11 1.4.4-5ubuntu2.1
      linux-firmware 1.34.14
      linux-image-2.6.32-41-generic 2.6.32-41.91
      linux-libc-dev 2.6.32-41.91

    Afterwards, it rebooted, since this was a kernel upgrade. Now it hangs at boot, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the problem has to be shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that the next step is that procfs and sysfs are moved to the real rootfs and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how and where. The problem exists with older kernels too, and this fix doesn't help: http://www.tummy.com/journals/entries/jafo_20111003_160440 Does anyone have an idea?
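    One avenue for narrowing this down: initramfs-tools honours a break= kernel parameter, which should stop things right around the point where the boot hangs (a sketch; parameter and path names as I understand them from the initramfs-tools man page linked above):

      # At the GRUB menu, edit the kernel line and append:
      #   break=bottom
      # This drops to a busybox shell after the init-bottom scripts run,
      # just before execution is handed to /sbin/init on the real root.

      # From that shell, the real rootfs should be mounted at /root, so
      # check that the init binary is actually there and the mounts look sane:
      ls -l /root/sbin/init
      cat /proc/mounts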

    Read the article

  • org.openide.awt.ColorComboBox

    - by Geertjan
    It's the time of year when a lot of NetBeans Platform tutorials are being reviewed, revised, and rewritten. Today I'm looking at the NetBeans Platform Paint Application Tutorial. Suddenly I remembered seeing something in a recent API Changes document about a new class, ColorComboBox. That means I can make the tutorial a lot simpler, since Tim Boudreau's external ColorChooser.jar is now superfluous. The new ColorComboBox works perfectly. Of course, the nice thing about using that JAR was that it showed the user how to incorporate external JARs, but I'll make sure to make a note of that in the tutorial, along the lines of: "If you don't like the NetBeans Platform color combobox and would like to replace it with your own, such as Tim's ColorChooser.jar or a JavaFX color chooser, take the following steps." In short, if you're using NetBeans APIs, write this on the ceiling above your bed: http://bits.netbeans.org/dev/javadoc/apichanges.html. Check that page regularly (mark it in your calendar as the first thing to do every Monday morning) and you'll be aware of the latest changes as they happen.

    Read the article

  • 2 min video about the SQL_Compare

    - by CatherineRussell
    It is nice to start blogging again! I am working on a new project in a small company now. We do not have a full-time database admin, so I have to cover multiple roles: getting requirements, writing docs and creating diagrams, designing the app, writing code, testing, and the DBA role. I am not a DBA, but I have to do day-to-day database changes: adding new columns and tables. Check out the 2-minute video about SQL Compare. This tool saves time by automatically comparing and synchronizing database schemas; eliminates mistakes when migrating database changes from dev, to test, to production; speeds up the deployment of new database schema updates; generates T-SQL scripts to update one database to match the schema of another; finds and fixes errors caused by differences between databases; and keeps an accurate history of all previous database records. http://www.red-gate.com/products/SQL_Compare/index.htm

    Read the article

  • Ubuntu 14.04 Bluetooth Magic Mouse doesn't pair (No agent available)

    - by Rafael Xavier
    The mouse gets discovered, but it doesn't pair. From /var/log/syslog:

      Apr 23 10:05:15 xavier bluetoothd[9873]: No agent available for request type 0
      Apr 23 10:05:15 xavier bluetoothd[9873]: btd_event_request_pin: Operation not permitted
      Apr 23 10:05:15 xavier bluetoothd[9873]: Connection refused (111)

    It's worth saying that the keyboard has paired and is working just fine, and that the mouse used to work just fine in Ubuntu 12.04 and 13, and still works when I reboot into Mac. This is the hci device:

      $ hcitool dev
      Devices:
              hci0    E0:F8:47:3A:3F:47

    How do I get it working?
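    One workaround suggested for the bluez 4.x stack in 14.04 (a sketch - the tools are the bluez test scripts as I understand them, and 0000 is the PIN Apple mice reportedly accept; the "No agent available" error is exactly what registering an agent addresses):

      # With the mouse in discoverable mode, find its address:
      hcitool scan

      # Pair via a command-line agent (enter PIN 0000 when prompted),
      # replacing XX:XX:XX:XX:XX:XX with the address found above:
      sudo bluez-simple-agent hci0 XX:XX:XX:XX:XX:XX

      # Mark the mouse trusted and connect it as an input device:
      sudo bluez-test-device trusted XX:XX:XX:XX:XX:XX yes
      sudo bluez-test-input connect XX:XX:XX:XX:XX:XX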

    Read the article

  • Configure Jenkins and Tomcat using Puppet

    - by ex3v
    I'm trying to set up a Spring dev environment (Jenkins, Tomcat) on Vagrant. What I really want to achieve is to limit the config to only Puppet scripts, so I can share it with my colleagues and we can work together on the same environment. So far I have managed to set up simple scripts to install Jenkins, Tomcat and so on, and they work fine. What about Jenkins configuration, though? I'm pretty green at Jenkins usage and configuration, and not sure if I'm doing it the right way... I found this article and I want to migrate the whole setup described in it to Puppet. Any ideas? Thanks.
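    One direction to consider (a sketch only; it assumes Jenkins already answers on localhost:8080, and job.xml stands in for a job definition exported from a hand-configured instance) is to keep the Jenkins setup in scripts that Puppet can run, driving everything through the Jenkins CLI:

      # Fetch the CLI jar from the running Jenkins instance:
      wget http://localhost:8080/jnlpJars/jenkins-cli.jar

      # Install the plugins the build needs, then restart safely:
      java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin git
      java -jar jenkins-cli.jar -s http://localhost:8080 safe-restart

      # Re-create a job from an XML definition kept under version control:
      java -jar jenkins-cli.jar -s http://localhost:8080 create-job myapp-build < job.xml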

    Read the article

  • Easy way of engaging non-programmers (i.e. designers) into using version control?

    - by Kevin
    What are some key ways of getting your team involved in using version control during development, web development or otherwise? I refuse to work without it, which means anyone involved in the project must also use it. It's just good practice. GUIs like Tower have helped, but the concept is either met with anger (a 'not my job!' kind of attitude), timidity, or just straight-up avoidance (using FTP instead, circumventing version control for, say, dev or deployment). Edit: I should have clarified a little that I don't just mean images/PSDs.

    Read the article

  • Mount an image created from ddrescue

    - by oshirowanen
    I know this question has been asked before, but following those answers does not seem to work for me. I have created an image of a USB stick; it is on my laptop hard drive. How do I mount this image? The command I used to create it was:

      ddrescue --no-split /dev/sdb usb_recovered usb_recovery_log

    What am I supposed to do next? Mount it? Fix it and then mount it? Mount it and then fix it? And how? UPDATE: What I want to recover are the files in the image. How, I don't know: I have tried testdisk and it can't find partitions, and I have tried fdisk and it can't find a partition table in the image either.
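    For reference, there are two mount cases that can apply here (a sketch; the offset and loop-device values are illustrative - since the image was taken of the whole stick, /dev/sdb, the filesystem may start partway into the file):

      # Case 1: the stick had no partition table - mount the image directly:
      sudo mount -o loop,ro usb_recovered /mnt

      # Case 2: there is a partition table - find where the partition
      # starts, in bytes:
      parted usb_recovered unit B print
      # ...then mount at that offset (replace 1048576 with the real start):
      sudo mount -o loop,ro,offset=1048576 usb_recovered /mnt

      # If mount refuses, attach the image to a loop device and run a
      # read-only filesystem check against it first:
      sudo losetup -o 1048576 /dev/loop0 usb_recovered
      sudo fsck -n /dev/loop0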

    Read the article

  • Change permission to mount disk at rdesktop

    - by Tal
    I have Ubuntu 10.04 and have installed rdesktop 1.7. I have run these commands:

      sudo umount /media/Tal
      sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=0000 /dev/sdb1 /media/Tal
      rdesktop -0 -r sound:local -f -u administrator -r clipboard:PRIMARYCLIPBOARD -r disk:tal=/media/Tal myip

    Tal is an external hard drive connected over USB, formatted with the NTFS file system. When I connect to Windows 7, I see the hard drive in Computer and I can access the files and create new files and folders. But when I try to copy a new file into a folder, it shows me an error message: "You need permission to perform this action. You require permission from the computer's administrator to make changes to this folder: Tal on my computername (Disk from Remote Desktop Connection)." I tried chmod and chown too, but I read in a Linux forum that with NTFS they are no use. Can someone help me with my problem?

    Read the article

  • How To Specify Bitrate, Codec and Demultiplexing for VLC Video Capture or Recording

    - by Subhash
    I capture video from an old TV tuner card - a Pinnacle PCTV - using VLC. The video comes from the composite input, and the audio from, I guess, the mixer or line-in. The command I use is:

      vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 :input-slave="alsa://hw:0,0"

    In VLC, I have enabled the Advanced Controls toolbar, which allows me to record videos when I want to. However, these videos are uncompressed - very big, and they play only with VLC. Totem throws a "Could not demultiplex stream" error. I need to convert them using WinFF to reduce their size and make them playable with Totem and other software. My question is whether I can configure the recording settings - the codecs and the bitrate - and also get the stream properly multiplexed. If I pass any -sout parameter with the command I get a "Segmentation fault". I use 64-bit Ubuntu 10.10.
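    For the codec/bitrate part, VLC's stream output chain should be able to do this from the same command line, written as a single quoted argument (a sketch; codec and bitrate values are illustrative, and this chain records straight to a file rather than using the toolbar button):

      vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 \
          :input-slave="alsa://hw:0,0" \
          --sout '#transcode{vcodec=mp4v,vb=1024,acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=capture.mp4}'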

    Read the article

  • Combining Shared Secret and Username Token – Azure Service Bus

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough will explain how you can implement WCF security with the Windows Azure Service Bus to ensure that you can protect your endpoint in the cloud with a shared secret, but also flow through a username token so that in your listening WCF service you will be able to identify who sent the message. This could either be an application or a user, depending on how you want to use your token.

    Prerequisites

    Before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing, but to keep the article shorter I am not going to walk through all of the steps of setting some of this up. In the solution we have a simple console application which will represent the client application. There is also the services WCF application, which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail about how to configure this. The link is: http://tinyurl.com/8s5nwrz

    The Service Component

    To begin with, let's look at the service component and how it can be configured to listen to the service bus using a shared secret but also accept a username token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.UN.Services. It has a single service, which is the Visual Studio template for a WCF service when you add a new WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how we have configured the WCF service. In the services section of the WCF configuration I have created my service and a local endpoint, which I simply used to do a little bit of diagnostics and to check it was working. More importantly, there is the Windows Azure endpoint, which is using the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note are the service behaviour called MyServiceBehaviour and the service bus endpoint behaviour called MyEndpointBehaviour. We will go into these in more detail later.

    The Relay Binding

    The relay binding for the service has been configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus, and the message credential is where we use our username token, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication will be made against ACS for accessing the service bus, and messages will not be accepted from any sender who has not been authenticated by ACS.

    The Endpoint Behaviour

    The endpoint behaviour is configured to use the shared secret client credential for accessing the service bus, and for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature, this is a very common setup for this section; there is nothing specific to the username token implementation here.

    The Service Behaviour

    Now we come to the part with most of the username token bits in it. In the service behaviour I have included the serviceCredentials element, set it up to use userNameAuthentication, and created my own custom username token validator. This setup means that WCF will hand off to my class for validating the username token details. I have also added the serviceSecurityAudit element to give me a simple access-auditing capability.

    My UsernamePassword Validator

    WCF hands off to the username password validator class I have implemented when validating the token, giving me a nice way to check the token credentials against an on-premise store. You have all of the validation features of a non-service-bus WCF implementation available, such as validating the username and password against Active Directory or ASP.NET membership features, or, as in my case, something much simpler.

    The Client

    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a username token over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In my WCF client configuration I have set up my details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is my configuration for the relay binding: I have configured security to use TransportWithMessageCredential, so we will flow the username token with the message, and the RelayAccessToken relayClientAuthenticationType, which means the component will validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we need to configure the endpoint behaviour; this is the normal configuration to use a shared secret for accessing a Service Bus endpoint. Finally, we have the code of the client in the console application which will call the service bus. We have created our proxy and then made a normal call to a WCF service, but this time we have also set the ClientCredentials to use the appropriate username and password, which will be flowed through the service bus to our service, which will validate them.

    Conclusion

    As you can see from the above walkthrough, it is not too difficult to configure a service to use both a shared secret and a username token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component for additional security features to be implemented.

    Sample

    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.UN.zip

    Read the article

  • Best Way To Develop Robust Cross-Platform Application?

    - by Clay
    Windows C programmer here (going back to 1992 and Windows 95, back when it was called Windows 93). Can function in C++, but mostly still a C programmer. Looking to build a cross-platform casual game. Very numbers-heavy with only a few artistic embellishments and animations, so perhaps a development environment for business apps might be the best option - or an easy-to-use 2D game dev platform. Target platforms: Windows, Mac, MS Tablet, iPhone, iPad, Android. I currently develop on Windows with Visual Studio 2012, but we could spend up to $50K on hardware/software/middleware if necessary. Not very competent at getting open-source software working; would rather pay the money and jump right into app development. Recommendations?

    Read the article

  • Getting a Conexant CX23885 TV Capture Card working

    - by Benny
    I'm new to Linux, and am trying to get my capture card working on 11.04. The only command that I know to run to find out any information is lspci, which tells me that I have:

      02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder (rev 04)

    I've looked at using Me TV, but haven't worked out how to configure it for my card, or what I need to do to get it running. I'm not fussed about what software I use to run the capture card, but I currently have only Me TV installed. Edit: When I run tvtime, I get the following errors:

      videoinput: Cannot open capture device /dev/video0: No such file or directory
      mixer: find error: Success
      mixer: Can't open mixer default, mixer volume and mute unavailable.
      mixer: Can't open device default/Line, mixer volume and mute unavailable.
      Segmentation fault
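    Before configuring any TV application, it seems worth confirming the kernel actually created a video device for the card (a sketch of the usual checks; the cx23885 driver ships with the kernel, and many Conexant boards also want firmware):

      # Did the driver bind, and does a device node exist?
      dmesg | grep -i cx23885
      ls -l /dev/video*

      # Is the module loaded at all?
      lsmod | grep cx23885

      # Install the firmware package many Conexant boards need, then reboot:
      sudo apt-get install linux-firmware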

    Read the article

  • creating the nodes for path finding during run time - more like path making and more

    - by bigbadbabybear
    I'm making my first game. I'm using JavaScript, as I currently want to learn to make games without needing to learn another language, but this is more of a general game dev question. It's a 2D turn-based tile/grid game; you can check it here: http://www.patinterotest.tk/ It creates a movable area when you hover over a player, and it implements the A* algorithm for moving the player.

    The Problem: I want the dynamic movable-area creation to already take into account a limited number of steps for a player.

    The Questions: What is a good way to do this? Is there another algorithm to use for this? The A* algorithm needs a start and a destination, and with what I want to do I don't have a destination - or should I just limit the iterations of the A* algorithm to the steps variable? Hopefully you understand the problem and questions easily.

    Read the article

  • How can these files be accessed?

    - by harsh.singla
    These files can be accessed from every artifact of the composite, such as .bpel, .mplan, .task, .xsl, .wsdl, etc. The 'oramds' protocol is used to access them. You need to set up your adf-config.xml file in your dev environment or JDeveloper to access these files from MDS. The sample adf-config.xml declares the http://xmlns.oracle.com/adf/security/config namespace, defines properties named jdbc-url and metadata-path, and sets credentialStoreLocation to ../../src/META-INF/jps-config.xml. This adf-config.xml is located in a directory named .adf/META-INF, which is in the application home of your project. The application home is the directory where the .jws file of your application exists. Other than setting up this file, you need not make any other changes in your project or composite to access MDS. After setting this up, you can create a new SOA-MDS connection in your JDev. This gives you a resource palette in which you can browse and choose the required file from MDS.

    Read the article

  • Tales of a corrupt SQL log

    Warning: I'm a simple dev, not an all-powerful DBA with godly powers. This morning, one of my sites was down and DNN reported a problem with the database. A quick series of tests revealed that the culprit was a corrupted log file. Easy fix, I said: I have daily backups, so it's just a matter of restoring a good copy of the database and log files. Well, I found out that's not exactly true. You see, for this database I have daily file backups, and these are not database backups created...

    Read the article

  • I don't know how to run e2fsck or fsck, or what their differences are

    - by Salvador
    My kern.log file advises me to run e2fsck:

      Aug 30 14:10:11 ubuntu kernel: [  122.378292] EXT4-fs (sda11): warning: maximal mount count reached, running e2fsck is recommended
      Aug 30 14:10:11 ubuntu kernel: [  122.387488] EXT4-fs (sda11): mounted filesystem with ordered data mode. Opts: (null)

    /dev/sda11 is not mounted within my current OS (Ubuntu 10.04). I have read that e2fsck is a dangerous command when run against the root partition, which is on the same hard disk as sda11. I would trust a solution along the lines of this one better than others: Can I run fsck or e2fsck when the Linux file system is mounted?
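    For the difference: fsck is just a front-end that detects the filesystem type and dispatches to the matching checker, so on an ext partition fsck and e2fsck end up running the same code (e2fsck is installed under names like fsck.ext4). A sketch of a safe invocation, with the partition unmounted as the linked answer stresses:

      # Make sure the partition is not mounted before checking it:
      sudo umount /dev/sda11

      # Forced check; -f checks even if the filesystem looks clean:
      sudo e2fsck -f /dev/sda11

      # Equivalent via the generic wrapper, which dispatches to e2fsck:
      sudo fsck -f /dev/sda11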

    Read the article

  • EOL of MySQL Forge

    - by Keith Larson
    Forge was intended to be a community wiki resource for sharing information. However, over the last few years we have seen Forge used less and less by the MySQL community, and more by spammers. What happened? MySQL Worklogs and MySQL Internals documentation will be moved to dev.mysql.com, with new anti-spam measures in place. The MySQL Wiki, which was the primary focus of forge.mysql.com, has been migrated to https://wikis.oracle.com/display/mysql MySQL Forge will EOL on August 1st, 2012.

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM) and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

      C:\> winrm quickconfig
      WinRM is not set up to receive requests on this machine.
      The following changes must be made:
      Set the WinRM service type to auto start.
      Start the WinRM service.
      Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux

    Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

      $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map

    Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is a simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

      $aMap =
      {
          Param
          (
              [PsObject] $dataset
          )

          # Indicate the job is running on the remote node.
          Write-Host ($env:computername + "::Map");

          # The hashtable to return
          $list = @{};

          # ... Perform the mapping work and prepare the $list hashtable
          # result with your custom PSObject...
          # ... The $dataset has a single 'Data' property which contains
          # an array of data rows, a subset of the originally submitted
          # data set.

          # Return the hashtable (Key, PSObject)
          Write-Output $list;
      }

    Reduce

    Likewise, the Reduce function follows a simple prototype that takes a $key and a result $dataset from the MapRedux partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

      $aReduce =
      {
          Param
          (
              [object] $key,
              [PSObject] $dataset
          )

          Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)

          # The hashtable to return
          $redux = @{};

          # Return
          Write-Output $redux;
      }

    All Together Now

    When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

      # Import the MapRedux and SQL Server providers
      Import-Module "MapRedux"
      Import-Module "sqlps" -DisableNameChecking

      # Query the database for a dataset
      Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
      $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
      Write-Host "Query: $query"
      $dataset = Invoke-SqlCmd -query $query

      # Build the Map function
      $MyMap =
      {
          Param
          (
              [PsObject] $dataset
          )

          Write-Host ($env:computername + "::Map");
          $list = @{};
          foreach($row in $dataset.Data)
          {
              # Aggregate a running Sum and Count per key.
              if($list.ContainsKey($row.MyKey) -eq $true)
              {
                  $s = $list.Item($row.MyKey);
                  $s.Sum += $row.Value1;
                  $s.Count++;
              }
              else
              {
                  $s = New-Object PSObject;
                  $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                  $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                  $list.Add($row.MyKey, $s);
              }
          }
          Write-Output $list;
      }

      $MyReduce =
      {
          Param
          (
              [object] $key,
              [PSObject] $dataset
          )

          Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
          $redux = @{};
          $sum = 0;
          $count = 0;
          foreach($s in $dataset.Data)
          {
              $sum += $s.Sum;
              $count += 1;
          }

          # Reduce to the average for this key.
          $redux.Add($key, $sum / $count);

          # Return
          Write-Output $redux;
      }

      # Create the item data
      $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

      # Array of processing nodes...
      $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

      # Run the Map Reduce routine...
      $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

      # Show the results
      Set-Location C:\
      $MyMrResults | Out-GridView

    Conclusion

    I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.

    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • How to decide how much to charge for development?

    - by rik
    So two other friends and I are a very small game dev studio. So far we haven't released a game, but we have two games almost ready to launch. A bigger studio saw our work and now they want to work with us; they need people to develop mobile games for them (iOS, Android). They want us to set the price for the projects (I can't share the specifics - we signed an NDA). They will give us all the assets (graphics/sound), so we only have to code. And because they only work with Unity3D, we have to learn it. How do we decide how much to charge for the projects?

    Read the article

  • Best Game Engine/Framework and Language for 2D actor/sprite intensive game

    - by Grungetastic
    I'm new to the game dev world. I have a rather large project in mind (I learn by setting myself challenges :P) and I'm wondering what the best engine/framework/language is for a 2D game with thousands of sprites/actors on screen at a time - bare-metal-type stuff. I still need to be able to zoom in and out with that many actors at once. This game will have no 3D elements. Any thoughts? Suggestions?

    Read the article

  • Preseed Partman: multiple partitions on one disk /tmp /data /usr swap

    - by Moritz
    Trying to get preseeding on 12.04 64-bit to work with what should be a basic setup. /dev/sda is the only drive being used:

      /     - rootfs - 100GB
      /boot - 1GB
      /tmp  - 10GB
      /data - should take all available space
      swap  - 10GB

      d-i partman-auto/expert_recipe string \
          boot-root :: \
              1000 50 1000 ext4 \
                  $primary{ } $bootable{ } \
                  method{ format } format{ } \
                  use_filesystem{ } filesystem{ ext4 } \
                  mountpoint{ /boot } \
              . \
              500 1000 10000 ext4 \
                  method{ format } format{ } \
                  use_filesystem{ } filesystem{ ext4 } \
                  mountpoint{ /tmp } \
              . \
              500 5000 100000000 ext4 \
                  method{ format } format{ } \
                  use_filesystem{ } filesystem{ ext4 } \
                  mountpoint{ /data } \
              . \
              64 2000 10000 linux-swap \
                  method{ swap } format{ } \
              . \
              500 3000 100000 ext4 \
                  method{ format } format{ } \
                  use_filesystem{ } filesystem{ ext4 } \
                  mountpoint{ / } \
              .

    If I only use the stanzas for /boot, swap and /, it works. I was also wondering whether I have to specify some recipe name other than "boot-root", but trying "thisNameIsNotDefinedInPartman" gave the same result. The error message displayed by the Ubuntu installer is always "no root file system is defined". Thanks for your help, Moritz

    Read the article
