Search Results

Search found 27 results on 2 pages for 'brady'.

Page 1/2 | 1 2  | Next Page >

  • Replacement of Outlooksoft (SAP BPC) with Oracle EPM at Brady Corporation

    Nigel Youell, Product Marketing Director, Enterprise Performance Management Applications at Oracle, discusses with Joe Bittorf, Project Manager at Brady Corporation, why and how they embarked on this major project to replace SAP BPC with Oracle's Enterprise Performance Management Solution. Joe covers the outstanding improvements they have achieved in their financial close process, how they worked with Oracle Partner Emerging Solutions, and the current project to further improve their planning and budgeting processes.

    Read the article

  • PHP regex to match sentences that contain a year

    - by zen
    I need a regular expression that will extract sentences from text that contain a year in them. Example text: Next, in 1988 the Bradys were back again for a holiday celebration, "A Very Brady Christmas". Susan Olsen (Cindy) would be missing from this reunion, Jennifer Runyon took her place. This was a two hour movie in which the Bradys got together to celebrate Christmas, introducing the world to the spouses and children of the Brady kids. This movie was the highest rated TV-movie of 1988. If the example text was variable $string, I need it to return: $sentenceWithYear[0] = Next, in 1988 the Bradys were back again for a holiday celebration, "A Very Brady Christmas". $sentenceWithYear[1] = This movie was the highest rated TV-movie of 1988. If it's possible to retain the year via regex, I'd use the year within the sentence and eventually insert the sentences into a database like: INSERT INTO table_name (year, sentence) VALUES ('$year', '$sentenceWithYear[x]')
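    One quick way to sanity-check the pattern idea from a shell before porting it to PHP (a sketch; assumes the text is saved in story.txt and that years are four digits starting with 19 or 20):

      # print every sentence that contains a year, one match per line
      grep -oE '[^.!?]*(19|20)[0-9]{2}[^.!?]*[.!?]' story.txt

    The same pattern, with a capturing group around the year, can then be dropped into preg_match_all on the PHP side to get both the sentence and the year for the INSERT.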

    Read the article

  • Is having a 'home' navigation item on the home page negative to your site's SEO?

    - by Brady
    My work colleague has recently had conversations with some SEO consultants, and after those conversations she has come to the conclusion that having a link to the home page on the home page will have a negative effect on the website's SEO. Because of this we are now building websites that don't show a home link until you are on a page other than the home page. If the above argument is true, then surely if we are on the about page of a website we shouldn't show a navigation item for the page we are on, and the same would apply to any other page of the website... So my question is: does having a home navigation item on the home page have a negative effect on the website's SEO? And if not, why has my colleague come to the above conclusion? Could she be misunderstanding something more important about home links on the home page regarding SEO?

    Read the article

  • Cloud-Burst 2012–Windows Azure Developer Conference in Sweden

    - by Alan Smith
    The Sweden Windows Azure Group (SWAG) will be running “Cloud-Burst 2012”, a two-day Windows Azure conference hosted at the Microsoft offices in Akalla, near Stockholm on the 27th and 28th September, with an Azure Hands-on Labs Day at AddSkills on the 29th September. The event is free to attend, and will be featuring presentations on the latest Azure technologies from Microsoft MVPs and evangelists. The following presentations will be delivered on the Thursday (27th) and Friday (28th): · Connecting Devices to Windows Azure - Windows Azure Technical Evangelist Brady Gaster · Grid Computing with 256 Windows Azure Worker Roles - Connected System Developer MVP Alan Smith · ‘Warts and all’. The truth about Windows Azure development - BizTalk MVP Charles Young · Using Azure to Integrate Applications - BizTalk MVP Charles Young · Riding the Windows Azure Service Bus: Cross-‘Anything’ Messaging - Windows Azure MVP & Regional Director Christian Weyer · Windows Azure, Identity & Access - and you - Developer Security MVP Dominick Baier · Brewing Beer with Windows Azure - Windows Azure MVP Maarten Balliauw · Architectural patterns for the cloud - Windows Azure MVP Maarten Balliauw · Windows Azure Web Sites and the Power of Continuous Delivery - Windows Azure MVP Magnus Mårtensson · Advanced SQL Azure - Analyze and Optimize Performance - Windows Azure MVP Nuno Godinho · Architect your SQL Azure Databases - Windows Azure MVP Nuno Godinho. There will be a chance to get your hands on the latest Azure bits and an Azure trial account at the Hands-on Labs Day on Saturday (29th), with Brady Gaster, Magnus Mårtensson and Alan Smith there to provide guidance, and some informal and entertaining presentations. Attendance for the conference and Hands-on Labs Day is free, but please only register if you can make it (and cancel if you cannot). Cloud-Burst 2012 event details and registration are here: http://www.azureug.se/CloudBurst2012/ Registration for Sweden Windows Azure Group Stockholm is here: swagmembership.eventbrite.com The event has been made possible by kind contributions from our sponsors, Knowit, AddSkills and Microsoft Sweden.

    Read the article

  • Strange DNS issue with internal Windows DNS

    - by Brady
    I've encountered a strange issue with our internal Windows DNS infrastructure. We have a website hosted on Amazon EC2 with the DNS running on Amazon Route 53. In the publicly facing DNS we have the wildcard record setup as an A record Alias pointing to an AWS Elastic Load Balancer sitting in front of our EC2 instances. For those who are not aware, the A record Alias behaves like a CNAME record, however no extra lookup is required on the client side (See http://docs.amazonwebservices.com/Route53/latest/DeveloperGuide/CreatingAliasRRSets.html for more information). We have a secondary domain that has the www subdomain as a CNAME pointing to a subdomain on the primary domain, which resolves against the wildcard entry. For example the subdomain www.secondary.com is a CNAME to sub1.primary.com, but there is no explicit entry for sub1.primary.com, so it resolves to wildcard record. This setup work without issue publicly. The issue comes in our internal DNS at our corporate office where we use the same primary domain for some internal only facing sites. In this setup we have two Active Directory DNS servers with one Server 2003 and one Server 2008 R2 instance. The zone is an AD integrated zone, but it is not the AD domain. In the internal DNS we have the wildcard record pointing to a third external domain, that is also hosted on Route 53 with an A record Alias pointing to the same ELB instance. For example, *.primary.com is a CNAME to tertiary.com, so in effect you have www.secondary.com as a CNAME to *.primary.com, which is a CNAME to tertiary.com. In this setup, attempting to resolve www.secondary.com will fail. Clearing the cache on the Server 2003 instance will allow it to resolve once, but subsequent attempts will fail. It fails even with a clean cache against the 2008 R2 server. It seems that only Windows clients are affected. A Mac running OSX Mountain Lion does not experience this issue. I'm even able to replicate the issue using nslookup. Against the 2003 server, with a freshly cleaned cache, I recieve the appropriate response from www.secondary.com: Non-authoritative answer: Name: subdomain.primary.com Address: x.x.x.x (Public IP) Aliases: www.secondary.com Subsequent checks simply return: Non-authoritative answer: Name: www.secondary.com If you set the type to CNAME you get the appropriate responses all the time. www.secondary.com gives you: Non-authoritative answer: www.secondary.com canonical name = subdomain.primary.com And subdomain.primary.com gives you: subdomain.primary.com canonical name = tertiary.com And setting type back to A gives you the appropriate response for tertiary.com: Non-authoritative answer: Name: tertiary.com Address: x.x.x.x (Public IP) Against the 2008 R2 server things are a little different. Even with a clean cache, www.secondary.com returns just: Non-authoritative answer: Name: www.secondary.com The CNAME records are returned appropriately. www.secondary.com returns: Non-authoritative answer: www.secondary.com canonical name = subdomain.primary.com And subdomain.primary.com gives you: subdomain.primary.com canonical name = tertiary.com tertiary.com internet address = x.x.x.x (Public IP) tertiary.com AAAA IPv6 address = x::x (Public IPv6) And setting type back to A gives you the appropriate response for tertiary.com: Non-authoritative answer: Name: tertiary.com Address: x.x.x.x (Public IP) Requests directly against subdomain.primary.com work correctly.

    Read the article

  • Mac OS X 10.5.8: need to save rsync password with ssh-copy-id

    - by Brady
    Hello all, I'll start by saying I'm very new to Macs but comfortable using the command line thanks to using Linux a lot. I currently have rsync set up to run between a Mac OS X 10.5.8 server and a Linux CentOS 5.5 server. This is the command I'm running on the Mac server: rsync -avhe ssh "/Path/To/Data" [email protected]:data/ As it runs it prompts for a password, but I need it to save the password. After looking around I need to use: ssh-keygen -t dsa save the passkey and then move it over to the Linux server using: ssh-copy-id -i .ssh/id_dsa.pub [email protected] But ssh-copy-id doesn't seem to exist on the Mac server. How do I copy this key over? I've tried searching for the answer myself but the help seems to be all over the place for this. Any help is greatly appreciated. Scott
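    For what it's worth, ssh-copy-id is just a convenience wrapper, so a minimal sketch of the manual equivalent (assuming the key pair was created as ~/.ssh/id_dsa on the Mac; user@linuxserver is a placeholder for the real login):

      # append the public key to the remote account's authorized_keys over ssh
      cat ~/.ssh/id_dsa.pub | ssh user@linuxserver \
        'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'

    After that, ssh and rsync to that host should stop prompting, provided the key itself has no passphrase.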

    Read the article

  • how to notify a program of another program? dll? directory? path?

    - by Brady Trainor
    I am trying to experiment with GNUS email in Emacs, in Windows (EDIT: x64 bit). I've got it to work in Ubuntu, but struggling with it in Windows. From http://www.gnu.org/software/emacs/manual/html_mono/emacs-gnutls.html#Help-For-Users I read in second paragraph: This is a little bit trickier on the W32 (Windows) platform, but if you have the GnuTLS DLLs (available from http://sourceforge.net/projects/ezwinports/files/ thanks to Eli Zaretskii) in the same directory as Emacs, you should be OK. I have downloaded and unzipped the gnutls-3.0.9-w32-bin package, but am not sure what to do with it. I have tried putting it in Program Files (x86), which is "the same directory as Emacs". I have tried putting it in the emacs-24.3 folder. I consider merging all the folders in between the two, but am hesitant as that seems a difficult troubleshoot attempt compared to my knowledge on these matters. I think Emacs needs to somehow see the gnutls binaries and/or dlls. My knowledge is limited on this. I've also struggled to understand PATHs for sometime now, and am not sure if that approach is relevant here. FYI, the emacs directory contains folders labeled bin, etc, info, leim, lisp and site-lisp. The gnutls directory contains folder labeled bin, include, lib and share. Hmm, now I'm finding lots of links on adding paths. Still, I'm skeptical that I would only add gnutls.exe path, as it seems the dlls are needed. Some additional data for Ramhound's first comment I have been attempting the (require 'gnutls) route. This seems to be the most relevant parts in the log: Opening connection to imap.gmail.com via tls... gnutls.c: [1] (Emacs) GnuTLS library not found Opening TLS connection to `imap.gmail.com'... Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com'...failed Opening TLS connection with `gnutls-cli --insecure -p 993 imap.gmail.com --protocols ssl3'...failed Opening TLS connection with `openssl s_client -connect imap.gmail.com:993 -no_ssl2 -ign_eof'...failed Opening TLS connection to `imap.gmail.com'...failed I am not sure what "in stallion" means. Emacs seems to have installed itself in program files (x86), so I assume it is 32 bit. I can try and figure out how to double check, but did not realize I would get such fast response time, and am headed out right now. I will try merging the files later tonight?

    Read the article

  • Rsync when run in cron doesn't work. Rsync between Mac OS X Server and Linux CentOS

    - by Brady
    I have a working rsync setup between Mac OS X Server and Linux CentOS when run manually in a terminal. I enter the rsync command, it asks for the password, I enter it and off it goes, runs and completes. Now I know that's working, I set out to fully automate it via cron. First off I create an SSH authorized key by running this command on the Mac server: ssh-keygen -t dsa -b 1024 -f /Users/admin/Documents/Backup/rsync-key Entering the password and then confirming it. I then copy the rsync-key.pub file across to the Linux server and place it in the rsync user's .ssh folder, renamed to authorized_keys: /home/philosophy/.ssh/authorized_keys I then make sure that the authorized_keys file is chmod 600 and the folder chmod 700. I then set up a shell script for cron to run: #!/bin/bash RSYNC=/usr/bin/rsync SSH=/usr/bin/ssh KEY=/Users/admin/Documents/Backup/rsync-key RUSER=philosophy RHOST=example.com RPATH=data/ LPATH="/Volumes/G Technology G Speed eS/Backup" $RSYNC -avz --delete --progress -e "$SSH -i $KEY" "$LPATH" $RUSER@$RHOST:$RPATH Then I give the shell file execute permissions and add the following to the crontab using crontab -e: 29 12 * * * /Users/admin/Documents/Backup/backup.sh I check my crontab log file after the above command should run and I get this in the log and nothing else: Feb 21 12:29:00 fileserver /usr/sbin/cron[80598]: (admin) CMD (/Users/admin/Documents/Backup/backup.sh) So I assume everything has run as it should. But when I check the remote server no files have been copied across. If I run the backup.sh file in a terminal as normal it still prompts for a password, but this time it's through the Mac Keychain system rather than typing into the console window. With the Mac Keychain I can set it to save the password so that it doesn't ask for it again, but I'm sure when run with cron this password isn't picked up. This is where I'm assuming rsync in cron is failing, because it needs a password to connect, but I thought the whole idea of making the SSH keys was to prevent the use of a password. Have I missed a step or done something wrong here? Thanks Scott
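    One hedged way to narrow this down (user@linuxserver is a placeholder): check whether the key really allows a non-interactive login, since a key generated with a passphrase will work in a terminal (the Keychain supplies the passphrase) but not from cron.

      # should log in and run 'true' without prompting;
      # BatchMode makes ssh fail instead of asking for a password/passphrase
      ssh -i /Users/admin/Documents/Backup/rsync-key -o BatchMode=yes user@linuxserver true

    If this fails, regenerating the key without a passphrase (or loading it into an agent that cron can reach) is the usual fix.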

    Read the article

  • How to set up Mac server to use two gateways

    - by Brady
    I recently asked this question: How to set Mac server to use different Gateway for internet bound traffic The answer given works but has presented me with another issue that I didn't make clear in that question. Here is my network layout as it stands: At the moment outside staff members use some services on the existing internet 1 link. Those services are hosted by the Mac server. If I change the gateway of the Mac server to the second modem those outside staff lose visibility of those services. Now I don't know how to go about solving this issue. I want the second link to be used when the Mac server goes to rsync data offsite but everything else to use link one. How do I do this? Thanks Scott EDIT: This has been resolved by setting the default gateway on the Mac server to 192.168.1.254, thus leaving everything on the network as it was before. But to get the Mac server to use the other link for rsync I've added a route to the Mac server to route traffic to the rsync server through the second gateway. sudo route add -net {server IP's}/{Netmask} 192.168.1.1 I've awarded the answer to gravyface for pointing me to a post on how to make this route persistent on the Mac

    Read the article

  • Bash script to keep last x number of files and delete the rest

    - by Brady
    I have this bash script which nicely backs up my database on a cron schedule: #!/bin/sh PT_MYSQLDUMPPATH=/usr/bin PT_HOMEPATH=/home/philosop PT_TOOLPATH=$PT_HOMEPATH/philosophy-tools PT_MYSQLBACKUPPATH=$PT_TOOLPATH/mysql-backups PT_MYSQLUSER=********* PT_MYSQLPASSWORD="********" PT_MYSQLDATABASE=********* PT_BACKUPDATETIME=`date +%s` PT_BACKUPFILENAME=mysqlbackup_$PT_BACKUPDATETIME.sql.gz PT_FILESTOKEEP=14 $PT_MYSQLDUMPPATH/mysqldump -u$PT_MYSQLUSER -p$PT_MYSQLPASSWORD --opt $PT_MYSQLDATABASE | gzip -c > $PT_MYSQLBACKUPPATH/$PT_BACKUPFILENAME The problem with this is that it will keep dumping the backups in the folder and not clean up old files. This is where the variable PT_FILESTOKEEP comes in. Whatever number this is set to, that's the number of backups I want to keep. All backups are time stamped, so ordering them by name DESC will give you the latest first. Can anyone please help me with the rest of the BASH script to add the cleanup of old files? My knowledge of BASH is lacking and I'm unable to piece together the code to do the rest.
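    A minimal sketch of one way to finish it off (assumes the backup folder contains only these timestamped dumps and the filenames have no spaces):

      # keep the newest $PT_FILESTOKEEP backups, delete the rest
      cd "$PT_MYSQLBACKUPPATH" || exit 1
      ls -1 mysqlbackup_*.sql.gz | sort -r | tail -n +$((PT_FILESTOKEEP + 1)) | xargs -r rm -f

    Sorting the names in reverse puts the newest epoch timestamps first; tail then skips the first $PT_FILESTOKEEP entries and xargs removes whatever is left.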

    Read the article

  • vim default save is no file type? default setting advisable?

    - by Brady Trainor
    I just came across a problem to which I seem to have found a solution, but I was a little surprised by the issue. In gVim, when I save a new document (new from "within" gVim), a la :w afile I realized it saves with, I guess, no file format, and thus is not visible in iPhone's PlainText app. The solution seems to be to save using :w afile.txt and then the problem seems solved. Is this a good way to solve it? Should I change a default somewhere, either in Windows or the _gvimrc file? I may consider doing some TeX'ing in vim at some point, so perhaps a default that allows for overriding when saving, and doesn't force txt when opening "any of all" documents.

    Read the article

  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently: On-site file server (Mac OS X Server) that is used by GFX designers; they have a working 1TB of data. Offsite server with 2TB available storage (CentOS 6). Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...). Mac OS X server does a full data backup to LTO 4 tape on a 10 day recycle (Mon-Fri for 2 weeks). rsync pushes about 60GB of file changes a day. The problem: The onsite tape backup is failing as 1TB of graphics files don't compress well to fit onto an 800GB LTO4 tape. Backup is incredibly slow doing a full backup. Pain in the backside getting people to remember to change the tape. Often gets forgotten, etc. The quick solution: Buy LTO5 drive and tapes. However this has been turned down because of the cost... What I would like: Something that works in the same way rsync works. Only changed data is sent over the wire and can be scheduled to run multiple times during the day. Data that is sent is compressed and sent over SSH. Something that keeps a 14-day retention but doesn't keep duplicate data. So as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server. To work with the Mac OS X server and CentOS 6 server. Not cost an arm and a leg. Must be a cheaper solution than buying an LTO5 drive with tapes (around £1500). Be able to be set up to run autonomously. Have some sort of control panel that will allow an admin to easily restore a file/folder. Any recommendations?
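    One commonly suggested approach that covers most of this list is rsync with --link-dest: each day gets its own snapshot folder on the CentOS server, unchanged files are hard-linked against the previous snapshot (so they take no extra space), and everything still travels compressed over SSH. A rough sketch, with placeholder paths and host:

      #!/bin/sh
      # daily snapshot; unchanged files are hard-linked to yesterday's snapshot
      TODAY=$(date +%Y-%m-%d)
      YESTERDAY=$(date -v-1d +%Y-%m-%d)      # BSD date syntax on Mac OS X
      SRC="/Volumes/WorkingData/"            # placeholder source path
      rsync -az --delete -e ssh \
        --link-dest="/backups/$YESTERDAY" \
        "$SRC" user@offsite-server:/backups/$TODAY/

    A nightly job on the CentOS side can then delete snapshot folders older than 14 days; because of the hard links, total usage stays close to the 1.84TB estimate above rather than 14 full copies.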

    Read the article

  • Understanding Column Properties for a SQL Server Table

    Designing a table can be a little complicated if you don’t have the correct knowledge of data types, relationships, and even column properties. In this tip, Brady Upton goes over the column properties and provides examples.

    Read the article

  • Having trouble passing text from MySQL to a Javascript function using PHP

    - by Nathan Brady
    So here's the problem. I have data in a MySQL DB as text. The data is inserted via mysql_real_escape_string. I have no problem with the data being displayed to the user. At some point I want to pass this data into a javascript function called foo. // This is a PHP block of code // $someText is text retrieved from the database echo "<img src=someimage.gif onclick=\"foo('{$someText}')\">"; If the data in $someText has line breaks in it like: Line 1 Line 2 Line 3 The javascript breaks because the html output is <img src=someimage.gif onclick="foo('line1 line2 line3')"> So the question is, how can I pass $someText to my javascript foo function while preserving line breaks and carriage returns but not breaking the code? =========================================================================================== After using json like this: echo "<img src=someimage.gif onclick=\"foo($newData)\">"; It is outputting HTML like this: onclick="foo("line 1<br \/>\r\nline 2");"> Which displays the image followed by \r\nline 2");"

    Read the article

  • What key on a keyboard can be detected in the browser but won't show up in a text input?

    - by Brady
    I know this one is going to sound weird but hear me out. I have a heart rate monitor that is hooked up like a keyboard to the computer. Each time the heart rate monitor detects a pulse it sends a keystroke. This keystroke is then detected by a web-based game running in the browser. Currently I'm sending the following keystroke: (`) In the browser-based game I'm detecting that key fine, and when the user is on any data input screen the (`) character is ignored using a bit of JavaScript. The problem comes when I leave the browser and go back to using the operating system in other ways: the (`) starts appearing everywhere. So my question is: is there a key that can be sent via the keyboard that is detectable in the browser but won't have any notable output on the screen if I switch to other applications? I'm kind of looking for a null key. A key that is detectable in the browser but has no effect on the browser or any other system application if pressed.

    Read the article

  • How to display part of an image at a specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on the web page. We know that the width and height of images end users upload cannot be controlled easily, so they are hard to display. At first, I attempted to zoom the images in/out to reach an appropriate presentation, and I made it work, but my boss is still not satisfied with my solution. The following is my approach: var autoResizeImage = function(maxWidth, maxHeight, objImg) { var img = new Image(); img.src = objImg.src; img.onload = function() { var hRatio; var wRatio; var Ratio = 1; var w = img.width; var h = img.height; wRatio = maxWidth / w; hRatio = maxHeight / h; if (maxWidth == 0 && maxHeight == 0) { Ratio = 1; } else if (maxWidth == 0) { if (hRatio < 1) { Ratio = hRatio; } } else if (maxHeight == 0) { if (wRatio < 1) { Ratio = wRatio; } } else if (wRatio < 1 || hRatio < 1) { Ratio = (wRatio <= hRatio ? wRatio : hRatio); } if (Ratio < 1) { w = w * Ratio; h = h * Ratio; } w = w <= 0 ? 250 : w; h = h <= 0 ? 370 : h; objImg.height = h; objImg.width = w; }; }; This way is only intended to limit the max width and height of the image, so every image in the album still has a different width and height, which still looks very ugly. And right at this minute, I know we can create a DIV and use the image as its background image, but that way is too complicated and indirect, and I don't want to take it. So I'm wondering whether there is a better way to display images with a fixed width and height without presentation distortion? Thanks.

    Read the article

  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just great until I link with an additional .lib file (links my DLL to another DLL named ABQSMABasCoreUtils.dll). This file contains some additional API from the parent application that I would like to utilize. I haven't even written any code to use any of the functions exported but just linking in this new DLL is causing problems. Specifically I get the following error when I attempt to run the program: The application failed to initialize properly (0xc0000025). Clock on OK to terminate the application. I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to the standard output: ERROR: Memory allocation attempted before component initialization I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (aka flailing around and flipping every switch I know of) I linked with /MAP turned on and found this in the resulting .map file: 0001:000af220 ??3@YAXPEAX@Z 00000001800b0220 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af226 ??2@YAPEAX_K@Z 00000001800b0226 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af22c ??_U@YAPEAX_K@Z 00000001800b022c f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af232 ??_V@YAXPEAX@Z 00000001800b0232 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll If I undecorate those names using "undname" they give the following (same order): void __cdecl operator delete(void * __ptr64) void * __ptr64 __cdecl operator new(unsigned __int64) void * __ptr64 __cdecl operator new[](unsigned __int64) void __cdecl operator delete[](void * __ptr64) I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist within this .map file or why my DLL is even attempting to load ABQSMABasCoreUtils.dll if I don't have any code that references this DLL. Can anyone help me put this information together and find out why this isn't working? For what it's worth I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL but that did not change the results.

    Read the article

  • Twitter Typeahead only shows 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]

    Read the article

  • Google I/O 2010 - Ignite Google I/O

    Google I/O 2010 - Ignite Google I/O Tech Talks. Speakers: Brady Forrest, Krissy Clark, Ben Huh, Matt Harding, Clay Johnson, Bradley Vickers, Aaron Koblin, Michael Van Riper, Anne Veling, James Young. Ignite captures the best of geek culture in a series of five-minute speed presentations. Each speaker gets 20 slides that auto-advance after 15 seconds. Check out last year's Ignite Google I/O. For all I/O 2010 sessions, please go to code.google.com/events/io/2010/sessions.html

    Read the article

  • Is the quality of online search subjective? Bing may be better than Google, according to Search Engine Land

    Is the quality of online search subjective? Bing may be better than Google, according to Search Engine Land. Conrad Saam, a marketing manager writing for Search Engine Land, ran twenty searches on both Bing and Google. Each time he sent queries that avoided the simplest requests (such as finding the website of a large company, for example) and mixed several words. For instance, he asked both search engines to find results for "the lawyer Tom Brady". Google got one of the results wrong, offering a page about a famous quarterback of the same name. After that, he personally judged the quality of the answers obtained. For him, Bing was bett...

    Read the article

  • How do you organise the cables in your racks?

    - by Tim
    I'm migrating my current half-size rack to a full-size rack and want to take the opportunity to reorganize and sort our spaghetti-hell of Ethernet cables. What system do you use for organising your cables? Do you use any tracking software? Do you physically label the cables? What are you identifying when you label each end? MAC address? Port number? Asset number? What do you use to label them? I was looking at a handheld labeler, but the wrap-around laser printer sheets might work. The Brady ID PAL seems good, but it's pricey. Ideas?

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing. It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing. The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security. Oracle Database customers have benefited from a rich set of security features: encryption, redaction, data masking, database firewall, label based access control – and much, much more. They want similar capabilities with their Hadoop cluster. Unfortunately, Hadoop wasn’t developed with security in mind. By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database. Some critical security features have been implemented – but even those capabilities are arduous to setup and configure. Oracle believes that a key element of an optimized appliance is that its data should be secure. Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing.

    Security Starts at Authentication

    A successful security strategy is predicated on strong authentication – for both users and software services. Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”. The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users.

    Default Authentication in Hadoop

    By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system. Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user. In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt. When logged in as the hr user, you can see the following files. Notice, we’re using the Hadoop command line utilities for accessing the data:

      $ hadoop fs -ls /user/hrdata
      Found 1 items
      -rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt
      $ hadoop fs -cat /user/hrdata/salaries.txt
      Tom Brady,11000000
      Tom Hanks,5000000
      Bob Smith,250000
      Oprah,300000000

    User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.

      $ hadoop fs -ls /user
      Found 1 items
      drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata

    However, DrEvil cannot view the contents of the folder due to lack of access privileges:

      $ hadoop fs -ls /user/hrdata
      ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------

    Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster:

      $ sudo useradd hr
      $ sudo passwd
      $ su hr
      $ hadoop fs -cat /user/hrdata/salaries.txt
      Tom Brady,11000000
      Tom Hanks,5000000
      Bob Smith,250000
      Oprah,300000000

    Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised.
    Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS.

    Big Data Appliance Provides Secure Authentication

    The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this through Kerberos integration.

    Figure 1: Kerberos Integration

    The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment.

    Authorize Access to Sensitive Data

    Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements:

    Secure Authorization: the ability to control access to data and/or privileges on data for authenticated users.

    Fine-Grained Authorization: the ability to give users access to a subset of the data (e.g. a column) in a database.

    Role-Based Authorization: the ability to create/apply template-based privileges based on functional roles.

    With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system.

    Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views.

    Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization.
    For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future.

    Audit Hadoop Cluster Activity

    Auditing is a critical component of a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster:

    Figure 3: Monitored Hadoop services.

    At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include:

    MapReduce: correlate the MapReduce job that accessed the file

    Oozie: describes who ran what as part of a workflow

    Hive: captures changes made to the Hive metadata

    The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA.

    Figure 4: Consolidated audit data across the enterprise.

    Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined that fire on violations of audit policies.

    Conclusion

    Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.
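    As a small illustration of what the Kerberos requirement means in practice, on a kerberized BDA the hr user from the earlier example has to obtain a ticket before the Hadoop commands will run (a sketch; the realm name EXAMPLE.COM is a placeholder):

      $ kinit hr@EXAMPLE.COM          # authenticate against the KDC
      Password for hr@EXAMPLE.COM:
      $ klist                         # shows the ticket that was granted
      $ hadoop fs -cat /user/hrdata/salaries.txt

    Without a valid ticket the same command is rejected, so the useradd/su trick shown above no longer yields access to the salary data.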

    Read the article

  • Upcoming presentations by me at Windows Azure Events

    - by ScottGu
    I recently blogged about a big wave of improvements we released for Windows Azure.  I also delivered a keynote on June 7th that discussed and demoed the enhancements – you can watch a recorded version of it online. Over the next few weeks I’ll be doing several more speaking events about Windows Azure in North America and Europe.  Below are details on some of the upcoming events and how you can sign up to attend one in person: Scottsdale, Arizona on June 19th, 2012 Attend this FREE all-day event in Scottsdale, Arizona on Tuesday, June 19th to learn more about Windows Azure, ASP.NET, Web API and SignalR.  I’ll be doing a 2-hour presentation on Windows Azure, followed by Scott Hanselman on ASP.NET and Web API, and Brady Gaster on SignalR.  Learn more about the event and register to attend here. Cambridge, United Kingdom on June 21st, 2012 Attend this FREE two-hour event in Cambridge (UK) on the evening of Thursday, June 21st.  I’ll be covering the new Windows Azure release – expect lots of demos and audience participation. Learn more about the event and register to attend here. London, United Kingdom on June 22nd, 2012 Attend the FREE all-day Microsoft Cloud Day conference in London (UK) on Friday, June 22nd to learn about Windows Azure and Windows 8.  I’ll be kicking off the event with a two-hour keynote, and will be followed by some other fantastic speakers. Learn more about the conference and register to attend here. TechEd Europe in Amsterdam, Netherlands on June 26th, 2012 I’ll be at TechEd Europe this year where I’ll be presenting on Windows Azure.  I’ll be in the general session keynote and also have a foundation track session on Windows Azure on Tuesday, June 26th. Learn more about TechEd Europe and register to attend here. Amsterdam, Netherlands on June 26th, 2012 Not attending TechEd Europe but near Amsterdam and still want to see me talk?  The good news is that the leaders of the Windows Azure User Group NL have set up a FREE event during the evening of Tuesday, June 26th where I’ll be presenting along with Clemens Vasters. Learn more about the event and register to attend here. Dallas, Texas on July 10th, 2012 I’ll be in Dallas, Texas on Tuesday, July 10th and presenting at a FREE all-day Microsoft Cloud Summit.  I’ll kick off the day with a keynote, which will be followed by a great set of additional Windows Azure talks as well as a “Grill the Gu” Q&A session with me over lunch. Learn more about the event and register to attend here. Additional Events I’ll be doing many more events and talks in the months ahead – I’ll blog details of additional conferences/events I’m doing as they are fixed. Hope to see some of you at the above ones! Scott

    Read the article

1 2  | Next Page >