Search Results

Search found 66331 results on 2654 pages for 'pass data'.


  • SQL Saturday is Coming to Nashville! Won't You?

    - by KKline
    How 'Bout a Little Context? Let me be direct with you. I love SQL Saturday. If it were a woman, I'd marry it. (Avoiding all extraneous thoughts of what my real wife would say, etc etc). Check out this fun Flickr feed from the recent SQL Saturday in Chicago or these picks by Jorge Segara (blog | twitter) to see the sort of fun that's in store. But who can argue with a day of free SQL Server training and a chance to network with great presenters and a wide swath of your peers? Keynotes are more...(read more)

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.
    For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below.
    Application Input
    Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache containing all credit card transactions. The cache is configured in the EPN as shown below:
        <wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
        <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
                     key-properties="cardID, transactionTime"
                     caching-system="CohCacheSystem">
        </wlevs:cache>
    Application Output
    The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.
        <wlevs:event-type type-name="FraudWarningEvent">
            <wlevs:properties type="tuple">
                <wlevs:property name="cardID" type="CHAR"/>
                <wlevs:property name="transactionTime" type="BIGINT"/>
                <wlevs:property name="transactionAmount" type="DOUBLE"/>
                <wlevs:property name="cardHistory" type="OBJECT"/>
            </wlevs:properties>
        </wlevs:event-type>
    Cache Data Aggregation using a Java Cartridge
    In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence’s query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:
        <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
            <property name="cache" ref="CCTransactionsCache"/>
        </bean>
    This is used by the Java class to set a static property:
        public void setCache(Map cache)
        {
            s_cache = (NamedCache) cache;
        }
    The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistory object is calculated in a similar manner. The complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer to the Coherence Developer’s Guide.
        public static CardHistoryData execute(String cardID)
        {
            ...
            Filter filter = QueryHelper.createFilter(
                "cardID = :cardID and transactionTime >= :transactionTime", map);
            CardHistoryData history = new CardHistoryData();
            Double sum = (Double) s_cache.aggregate(filter,
                new DoubleSum("getTransactionAmount"));
            history.setTotalAmount(sum);
            ...
            return history;
        }
    The Java cartridge method is used from CQL as seen below:
        select cardID, transactionTime, transactionAmount,
               CCTransactionsAggregator.execute(cardID) as cardHistory
        from inputChannel
        where transactionAmount > 1000
    This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.

    Read the article

  • Presenting at the San Francisco SQL Server User Group - 12-Sep-2012

    - by RickHeiges
    I have a business trip scheduled out far enough in advance for a change. I was able to schedule a presentation at the San Francisco SQL Server User Group on Sep 12 about SQL Server Consolidation Strategies. If you will be in the SF area on Sep 12, I invite you to attend or just drop by to say hello. You can find out more about the group at http://www.meetup.com/The-San-Francisco-SQL-Server-Meetup-Group/ Hope to see you there!...(read more)

    Read the article

  • SQL in the City (Charlotte) Wrap Up

    - by drsql
    Ok, it has been quite a while since the event, two weeks and a day to be exact, but I needed a rest before hitting Windows Live Writer again. Speaking is exhausting, traveling is exhausting, and well, I replaced my laptop and had to get all of my software back together. (Between Windows 8.1 sync features, Dropbox and Skydrive, it has never been easier…but I digress.) There are plenty of great vendors out there, but one of my favorites has always been Red-Gate. I have written half of a book with them,...(read more)

    Read the article

  • Logical and Physical Modeling for Analytical Applications

    - by Dejan Sarka
    I am proud to announce that my first course for Pluralsight has been released. The course title is Logical and Physical Modeling for Analytical Applications. Here is the description of the course. A bad data model leads to an application that does not perform well. Therefore, when developing an application, you should create a good data model from the start. However, even the best logical model can’t help when the physical implementation is bad. It is also important to know how SQL Server stores and accesses data, and how to optimize the data access. Database optimization starts by splitting transactional and analytical applications. In this course, you will learn how to support analytical applications with logical design, gain an understanding of the problems with data access for queries that deal with large amounts of data, and learn about SQL Server optimizations that help solve these problems. Enjoy the course!

    Read the article

  • SQLRally Nordic and SQLRally Amsterdam: Wrap Up and Demos

    - by Adam Machanic
    First and foremost: Huge thanks, and huge apologies, to everyone who attended my sessions at these events. I promised to post materials last week, and there is no good excuse for tardiness. My dog did not eat my computer. I don't have a dog. And if I did, she would far prefer a nice rib eye to a hard chunk of plastic. Now, on to the purpose of this post... Last week I was lucky enough to have a first visit to each of two amazing cities, Stockholm and Amsterdam. Both cities, as mentioned previously...(read more)

    Read the article

  • Telerik Releases the Data Service Wizard

    After a great beta cycle, Telerik is proud to announce today the commercial availability of the OpenAccess Data Service Wizard. You can download it and install it with Telerik OpenAccess Q1 2010 for both Visual Studio 2008 and 2010 RTM. If you are new to the Data Service Wizard, it is a great tool that will allow you to point a wizard at your OpenAccess generated data access classes and automatically build a WCF, Astoria (WCF Data Services), REST or ATOMPub collection endpoint, complete with the CRUD methods if applicable. If you are familiar with the Data Service Wizard already, there will be two new surprises in the release version. If you generated a domain model with the new OpenAccess Visual Entity Designer, you have only one file added to your project, mydomainmodel.rlinq for example. The first surprise of the new Data Service Wizard is that if you right click on the domain model in Visual Studio, ...

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but that one also has primary keys, foreign keys and indexes. How do I get the data from the first database to the other? Here is the description of my crusade. And believe me – it is not a nice one.
    Bugs in the import and export wizard
    There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let’s go step by step and make things work in our scenario.
    Databases
    There are two databases. Let’s name them like this:
    PLAIN – contains the data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
    CORRECT – an empty database with the same structure as the remote database (indexes, keys and everything else, but no data)
    Our goal is to get the data from PLAIN to CORRECT.
    1. Create the import and export package
    At this point we will create a faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let’s say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don’t let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.
    2. Modify the import and export package
    Now let’s clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON’T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so that the existence of each table is checked before the table is created (yes, you have to do it manually).
    Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO
        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO
    Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:
        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO
    Save the task. Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task. Then move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don’t overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package. Now you are done with your package. Run it to test it and clean out all the errors you find.
    TRUNCATE vs DELETE
    You can see that I used DELETE FROM instead of TRUNCATE. Why? Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used, I have provided the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.
    Conclusion
    My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and we still have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to take for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools were more intelligent one day.
    References
    T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
    TRUNCATE TABLE (MSDN Library)
    Error Handling in SQL 2000 – a Background (Erland Sommarskog)
    Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
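    For reference, here is the same pre- and post-copy T-SQL pulled together into one rough sketch of what the two Execute SQL tasks described above run against the CORRECT database; the comment marking where the SSIS data flows execute is illustrative and not part of the post's package:
        -- Step 1 (first Execute SQL task): disable all FK/check constraints
        -- so rows can be loaded in any order.
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
        GO
        -- Step 2 (same task): clear out any existing rows.
        -- DELETE is used instead of TRUNCATE because of the FK references.
        EXEC sp_MSForEachTable 'DELETE FROM ?';
        GO
        -- ... the SSIS Data Flow tasks copy the data from PLAIN to CORRECT here ...
        -- Step 3 (last Execute SQL task): re-enable the constraints.
        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL';
        GO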

    Read the article

  • Recap of SQLSat #65

    - by RickHeiges
    Since the MVP Summit was this past week, I decided to head out to the West Coast a little earlier and attended SQLSat#65. I did not submit to speak at the conference, but I did help out some by introducing speakers in one of the rooms and a few other places where I could. I started out in a session by Scott Klein about SQLAzure. BTW, Microsoft now has a 30-day offer for SQL Azure where you do not need to provide Credit Card info. I then sat in for a while on Alan Hirt's Session on building a Cluster...(read more)

    Read the article

  • Back in Atlanta! Wed, Feb 9 2011

    - by KKline
    I always enjoy spending time with my friends from Atlanta, as well as meeting folks and making new friends. If you live in the Atlanta area, I hope you'll join me on the evening of Wednesday, February 9th, 2011. Details are at the Atlanta SQL Server user group website. It's common knowledge that I have a terrible memory for many things. However, one of the few things that my memory is usually really good at is remembering names & faces (and remembering stories, but that is another story as well)....(read more)

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath
    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.
    Reference Data Management
    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control.
    Financial Transformation
    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way of doing business and seek to proactively transform how they operate their business and underlying processes. Financial transformation often begins with the redesign of one’s chart of accounts. The ability to model and redesign one’s chart of accounts collaboratively, quickly validate it against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications while piloting the in-flight transformation, can mean the difference between controlled success and project failure.
    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle’s Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting.
    Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of a provisioned database depends on the service template selected by the Self Service user while requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.
    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post deployment script option.
    Using Data Pump to populate databases
    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:
    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:
        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull
    Figure-1: Full export of database using Data Pump
    2. Create a post deployment SQL script [sample shown in Fig. 2]. This script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned.
        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running import
            h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /
    Figure-2: Importing using Data Pump PL/SQL procedures
    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.
    4. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.
    Figure-3: Using the Custom Script option for calling the import SQL
    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    Using Transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:
    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required, describes the steps required to convert datafiles, and backs up the tablespaces.
    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).
    3. Create a post deployment SQL script. This script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.
    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created when the transported tablespaces are plugged in.
    5. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.
    Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
    More Information:
    Database-as-a-Service on Exadata Cloud
    Podcast on Database as a Service using Oracle Enterprise Manager 12c
    Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    DBaaS Cookbook
    Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
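    The sample scripts linked above are not reproduced here, but as a rough sketch of the kind of pre-checks such a backup script performs (the tablespace names USERS and APP_DATA are hypothetical placeholders), the endian comparison and self-containment check can be done with standard dictionary views and the DBMS_TTS package:
        -- Compare the endian format of the source platform with that of the target;
        -- if they differ, the datafiles must be converted (e.g. with RMAN CONVERT).
        SELECT d.platform_name, tp.endian_format
          FROM v$database d
          JOIN v$transportable_platform tp
            ON tp.platform_name = d.platform_name;

        -- Verify that the tablespace set is self-contained before exporting it.
        EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS,APP_DATA', TRUE);
        SELECT * FROM transport_set_violations;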

    Read the article

  • Speaking at Atlanta.MDF on March 12

    - by RickHeiges
    I am fortunate enough to be speaking to a user group with a really cool name - Atlanta.MDF (Microsoft Database Forum). Although I visit Atlanta often, it usually involves running from one concourse to another, and rarely do I get the chance to visit the user group. I have made it to the user group on several occasions in the past, but it has been several years. This will be my first presentation to the group. I will be speaking about Database Consolidation - something I have been doing for years....(read more)

    Read the article

  • Recap - SQL Saturday 151 in Orlando

    - by KKline
    It's always a feel-good experience for me to return to SQL Saturday in Orlando, the place where SQL Saturdays were started by Andy Warren (Twitter | Blog). On this trip, I delivered a full-day, pre-conference seminar on Troubleshooting and Performance Tuning SQL Server. I also delivered a session on SQL Server Internals and Architecture to a totally packed house. For those of you who emailed me directly, here's the link for the special SQL Sentry offer. I got to attend the extended events session...(read more)

    Read the article

  • We Are SQLFamily

    - by AllenMWhite
    On Monday, Tom LaRock (b / @sqlrockstar) presented his #MemeMonday topic as What #SQLFamily Means To Me. The #sqlfamily hash tag is a relatively new one, but is amazingly appropriate. I've been working with relational databases for almost 20 years, and for most of that time I've been the lone DBA. The only one to set things up, explain how things work, fix the problems, make it go faster, etc., etc., yadda, yadda, yadda. I enjoy being 'the guy', but at the same time it gets hard. What if I'm wrong?...(read more)

    Read the article

  • Big Data Appliance

    - by David Dorf
    Today Oracle announced the next release of its Big Data Appliance, an engineered system composed of hardware and software targeting the efficient processing of big data. The solution leverages 288 Intel cores running Cloudera's distribution of Apache Hadoop in 1.1 TB of main memory. This monster helps companies acquire, organize, and analyze large volumes of structured and unstructured data. Additionally, new versions of the Oracle Big Data Connectors and Oracle NoSQL Database were released. Why is this important to retailers? As the infographic below conveys, mobile and social have added even more data to the already huge collections of POS transactions and e-commerce weblogs. Retailers know that mining that data will help them make better decisions that lead to increased sales, better customer service, and ultimately a successful retail business. [Infographic: Monetate]

    Read the article

  • Microsoft launches two new Data Centres for Azure in US to meet growing demand

    - by Gopinath
    In order to meet the growing demand for Windows Azure in the US, Microsoft has launched two new data centres in the US – East US and West US. With the addition of these two data centres, the number of Azure data centres across the globe has grown to 8, 4 of which are located in the US. The two new data centres provide Compute and Storage resources, and a few enthusiastic customers have already deployed their applications. Other services like SQL Azure and AppFabric will be offered by these data centres in the coming months. The addition of new data centres is a good sign for Microsoft, as customer demand for their Cloud offering is growing. Amazon Web Services is the pioneer in Cloud Computing, and they offer a wider range of Cloud Services compared to Microsoft. Source: Windows Azure Blog

    Read the article

  • SQL Rally Presentations

    - by AllenMWhite
    As I drove to Dallas for this year's SQL Rally conference (yes, I like to drive) I got a call asking if I could step in for another presenter who had to cancel at the last minute. Life happens, and it's best to be flexible, and I said sure, I can do that. Which presentation would you like me to do? (I'd submitted a few presentations, so it wasn't a problem.) So yesterday I presented "Gathering Performance Metrics With PowerShell" at 8:45AM, and my newest presentation, "Manage SQL Server 2012 on Windows...(read more)

    Read the article

  • Calling home, receiving calls and smartphone data from the US

    - by Rob Farley
    I got asked about calling home from the US, by someone going to the PASS Summit. I found myself thinking “there should be a blog post about this”... The easiest way to phone home is Skype - no question. Use WiFi, and if you’re calling someone who has Skype on their phone at the other end, it’s free. Even if they don’t, it’s still pretty good price-wise. The PASS Summit conference centre has good WiFi, as do the hotels, and plenty of other places (like Starbucks). But if you’re used to having data all the time, particularly when you’re walking from one place to another, then you’ll want a sim card. This also lets you receive calls more easily, not just solving your data problem. You’ll need to make sure your phone isn’t locked to your local network – get that sorted before you leave. It’s no trouble to drop by a T-mobile or AT&T store and get a prepaid sim. You can’t get one from the airport, but if the PASS Summit is your first stop, there’s a T-mobile store on 6th in Seattle between Pine & Pike, so you can see it from the Sheraton hotel if that’s where you’re staying. AT&T isn’t far away either. But – there’s an extra step that you should be aware of. If you talk to one of these US telcos, you’ll probably (hopefully I’m wrong, but this is how it was for me recently) be told that their prepaid sims don’t work in smartphones. And they’re right – the APN gets detected and stops the data from working. But luckily, Apple (and others) have provided information about how to change the APN, which has been used by a company based in New Zealand to let you get your phone working. Basically, you send your phone browser to http://unlockit.co.nz and follow the prompts. But do this from a WiFi place somewhere, because you won’t have data access until after you’ve sorted this out... Oh, and if you get a prepaid sim with “unlimited data”, you will still need to get a Data Feature for it. And just for the record – this is WAY easier if you’re going to the UK. I dropped into a T-mobile shop there, and bought a prepaid sim card for five quid, which gave me 250MB data and some (but not much) call credit. In Australia it’s even easier, because you can buy data-enabled sim cards that work in smartphones from the airport when you arrive. I think having access to data really helps you feel at home in a different place. It means you can pull up maps, see what your friends are doing, and more. Hopefully this post helps, but feel free to post comments with extra information if you have it. @rob_farley

    Read the article

  • SQLPass NomCom election: Why I voted twice

    - by Hugo Kornelis
    Did you already cast your votes for the SQLPass NomCom election ? If not, you really should! Your vote can make a difference, so don’t let it go to waste. The NomCom is the group of people that prepares the elections for the SQLPass Board of Directors. With the current election procedures, their opinion carries a lot of weight. They can reject applications, and the order in which they present candidates can be considered a voting advice. So use care when casting your votes – you are giving a lot...(read more)

    Read the article

  • Testing Reference Data Mappings

    - by Michael Stephenson
    Background
    Mapping reference data is one of the common scenarios in BizTalk development, and it's usually a bit of a pain when you need to manage a lot of reference data, whether it be through the BizTalk Cross Referencing features or some kind of custom solution. I have seen many cases where only a couple of the mapping conditions are ever tested.
    Approach
    As usual, I like to see these things tested in isolation before you start using them in your BizTalk maps, so you know your mapping functions are working as expected. This approach can be used for almost all of your reference data type mapping functions, where you can take advantage of MSTest's data-driven tests to test lots of conditions without having to write millions of tests.
    Walk Through
    Rather than go into the details of this here, I'm going to call out to one of my colleagues who wrote a nice little walk-through about using data-driven tests a while back. Check out Callum's blog: http://callumhibbert.blogspot.com/2009/07/data-driven-tests-with-mstest.html

    Read the article

  • SQL Saturday #274 Slovenia

    - by Dejan Sarka
    Yes, here it is: SQL Saturday #274 is coming to Slovenia (#sqlsatSlovenia). The event will take place on Saturday, December 21st, at the company pixi* labs, Informacijske tehnologije, d.o.o., Poslovna cona A 2, SI-4208 Šencur. This company generously offered to host the event. We, the whole Slovenian SQL Server community, are very grateful for this. At this time, a call for speakers went out, and we are already getting the first proposals. We are especially happy that we will get the possibility to show the foreign speakers how beautiful Slovenia, and especially the capital Ljubljana, is in December. Expect a lot of partying right on the streets, no matter the weather. Be prepared, we have slightly weird customs when it comes to drinks. For example, our regular special discount offer is not three drinks for the price of two; it is six drinks for the price of five. If you are a speaker or want to become one, consider sending a proposal. Most of the sessions will be held in English, so even if you don’t want to speak, consider coming as a visitor. Or maybe you would be interested in becoming a sponsor. Although we are targeting a low-budget event, any kind of sponsorship is very welcome. Please feel free to contact the organizers if you are interested in becoming a sponsor: Matija Lah – [email protected], Mladen Prajdic - [email protected], or Dejan Sarka - [email protected]. Looking forward to seeing you all!

    Read the article

  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then, a reducer is assigned a key by the master. The reducer reads from the intermediate file containing the key and then calls reduce using the data it has read. But in detail, how is the intermediate data organized? Can the data corresponding to a key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held by a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation?

    Read the article

  • Participating in 3 SQL events in 8 days

    - by AaronBertrand
    Well, 8 days not including lead and lag travel time. A quick summary of the three events, and the flights it took to get me to each: SQL Saturday #105 - Dublin, Ireland Flights on March 21st: Providence -> Philadelphia (236 miles) Philadelphia -> Dublin (3,260 miles) Time zone change: +4 when we got there, plus Daylight Saving Time kicked in, so +5 after the event. I spoke at this event, and manned the SQL Sentry booth with cohort Scott Fallen. This event was a fantastic SQL Saturday - very...(read more)

    Read the article
