Search Results

Search found 16924 results on 677 pages for 'oracle technical'.

Page 515/677 | < Previous Page | 511 512 513 514 515 516 517 518 519 520 521 522  | Next Page >

  • Data Governance (Veri Yönetişimi)

    - by Arda Eralp
    Data governance is a system of responsibilities for everything done with data. Policies, standards, and procedures form the foundation of this system; through them, the system decides when, under which conditions, in which activities, by which methods, and by whom data will be used. For the system to operate successfully, the organization must first build awareness, and then adopt a culture of governance and architecture. Only under these conditions can the system work. For these reasons, data governance is not a short process but one that runs for as long as the organization exists, which tells us that data governance is a program, not a project.
    At the start of the program, the essentials are clarifying the organization's needs and building awareness. The target audience is everyone who works with data, directly or indirectly, so the program begins with meetings with the teams that make up that audience. These meetings both build awareness and let each team state its needs first-hand. Once those needs are clear, this ongoing process can be planned. Planning requires prioritizing the needs, because each team's needs may differ and not all of them can be met at once; the basis for that prioritization is how the teams' needs relate to one another. Next, the size of the organization must be considered: if the organization is too large to be governed all at once, governance starts with the teams whose needs have the highest priority and is then rolled out across the whole organization at a steady pace.
    Once the needs are identified and the relevant teams selected, program planning can begin. The first planning task is the Data Governance Office, which will control the stages of the process and oversee it once it is operating in the organization. Along with the Office, the roles in the process and their responsibilities are defined. The planning phase thus covers the Data Governance Office, roles and responsibilities, security, and the systems in which data is stored. When planning is complete, the program moves into its operational phase with the selected teams and needs. In this phase, work is done on security and on the data storage systems according to each team's needs; that work is documented as a process, and when it finishes, the same process is run with another team and another need, so that the process becomes standardized across the organization. On the security side, both access security and usage security of the data are addressed. The work on the data storage systems ensures that they are built and managed according to the standards defined in the program. After the operational phase comes the monitoring phase: by this point the Data Governance Office exists and the policies, standards, and procedures have been defined, and the Office's staff, within their roles and responsibilities, monitor the program's operation and change the policies, standards, and procedures when they see the need.

    Read the article

  • copy & paste in VirtualBox remote console when running headless

    - by katsumii
    One can run a VirtualBox guest in "headless" mode and access it using a Remote Desktop Protocol (RDP) client. This is typical when the VBox server is installed on Linux/Solaris where no X-window stack is installed and users use Windows to access the VM. So, one can install an OS into VBox guests using a Remote Desktop client (e.g. mstsc.exe). Here's the setting. One lesser-known feature here is that you can copy & paste into and out of the VM guest from your client. Apparently, "VMware Workstation" still doesn't support this; the VMware Workstation Documentation Center states: "You cannot copy and paste text between the host system and the guest operating system."
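    For anyone trying this, here is a minimal sketch of the server-side commands involved, using the standard VBoxManage/VBoxHeadless CLI of recent VirtualBox releases (the VM name "myguest" and the port are placeholders, not from the original post):

      # Enable the RDP (VRDE) server for the guest and pick a port
      VBoxManage modifyvm "myguest" --vrde on --vrdeport 3389

      # Optionally enable the shared clipboard for copy & paste
      # (bidirectional requires the Guest Additions in the guest)
      VBoxManage modifyvm "myguest" --clipboard bidirectional

      # Start the guest with no local display; then connect from the
      # client with any RDP client, e.g. mstsc.exe pointed at host:3389
      VBoxHeadless --startvm "myguest"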

    Read the article

  • Kronos Workforce Mobile Apps (w/Java ME tech) lets bosses and staff work better

    - by hinkmond
    The Kronos Workforce Mobile apps let bosses spy on their workers, and let workers do what workers do best (uh, you know, work?), all using Java ME technology. See: Enable your Mobile Workforce w/Kronos. Here's a quote:

    Kronos® Workforce Mobile™ Manager – allows managers to use their devices to monitor workforce operations, resolve exceptions, and respond quickly to employee requests.

    Kronos Workforce Mobile Employee – enables employees to track their work in real time, quickly and easily review information such as their schedules and timecards, and request time off.

    Kronos mobile applications are delivered as native applications for [blah-blah-blah]. A Java ME option is also available, which runs on a wide range of feature phones.

    Good stuff for the enterprise. Java ME technology helps run the mobile enterprise. I like that. Kinda catchy... Hinkmond

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average the 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets to 1/3600th of their original size!!! Very cool. I can now allow my datasets to run forever, and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32M on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my "Disk: Percent utilized" from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", and then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look. Steve

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

    A System for Building Queries

    When you're building queries, you could use a system like the following:

    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       - Values you want to see in your output.
       - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       - Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
       - Values you want to order by. For example, you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
       - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example, you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of the columns at either end of such a connection.
       - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
    4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.

    A Worked Example

    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, you could do the following:

    1. The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld; both of those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.
    2. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country; also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:

       FROM Country JOIN City ON Country.Capital = City.ID

    3. The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:

       WHERE Country.Population > 100e6

    4. To sort the result set in reverse order of difference in GNP, you could use either the calculation or its position in the output (it's the third column):

       ORDER BY GNP - GNPOld      or      ORDER BY 3

    5. Finally, project the columns you wish to see by constructing the SELECT clause:

       SELECT Country.Name AS Country, City.Name AS Capital,
              GNP - GNPOld AS `Difference in GNP`

    The whole statement ends up looking like this:

    mysql> SELECT Country.Name AS Country, City.Name AS Capital,
        ->        GNP - GNPOld AS `Difference in GNP`
        -> FROM Country JOIN City ON Country.Capital = City.ID
        -> WHERE Country.Population > 100e6
        -> ORDER BY 3 DESC;
    +--------------------+------------+-------------------+
    | Country            | Capital    | Difference in GNP |
    +--------------------+------------+-------------------+
    | United States      | Washington |         399800.00 |
    | China              | Peking     |          64549.00 |
    | India              | New Delhi  |          16542.00 |
    | Nigeria            | Abuja      |           7084.00 |
    | Pakistan           | Islamabad  |           2740.00 |
    | Bangladesh         | Dhaka      |            886.00 |
    | Brazil             | Brasília   |         -27369.00 |
    | Indonesia          | Jakarta    |        -130020.00 |
    | Russian Federation | Moscow     |        -166381.00 |
    | Japan              | Tokyo      |        -405596.00 |
    +--------------------+------------+-------------------+
    10 rows in set (0.00 sec)

    Queries with Aggregates and GROUP BY

    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population, calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.

    As an example, consider the following, which shows a count of countries and the average population per continent:

    mysql> SELECT Continent, COUNT(Name), AVG(Population)
        -> FROM Country
        -> GROUP BY Continent;
    +---------------+-------------+-----------------+
    | Continent     | COUNT(Name) | AVG(Population) |
    +---------------+-------------+-----------------+
    | Asia          |          51 |   72647562.7451 |
    | Europe        |          46 |   15871186.9565 |
    | North America |          37 |   13053864.8649 |
    | Africa        |          58 |   13525431.0345 |
    | Oceania       |          28 |    1085755.3571 |
    | Antarctica    |           5 |          0.0000 |
    | South America |          14 |   24698571.4286 |
    +---------------+-------------+-----------------+
    7 rows in set (0.00 sec)

    In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause, as you require. You would also normally alias columns to make the output more suited to your requirements.

    More Complex Queries

    Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post, and you could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...
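    To make that last point concrete, here is a small sketch (against the same world sample database; the output aliases are invented for illustration) that combines a join with aggregation and aliases the output columns:

      SELECT co.Continent,
             COUNT(*)                  AS Countries,
             ROUND(AVG(ci.Population)) AS AvgCapitalPopulation
      FROM Country co
      JOIN City ci ON co.Capital = ci.ID
      GROUP BY co.Continent
      ORDER BY Countries DESC;

    Here X is the count of countries, Y is the average capital-city population, and Z is the continent, exactly the X, Y per Z pattern described above.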

    Read the article

  • java.net outage

    - by alexismp
    The GlassFish website has been down for a number of hours (together with a number of other projects) as a result of a general outage in the java.net datacenter. The team is working hard on getting everything back to normal. You can track progress by following @ProjectKenai. Update: services should now all be back to normal. If you face java.net issues in the future, consider reporting them here. And now, back to work!

    Read the article

  • A Simple Solution For NetBeans RCP Apps That Need A Groovy Editor

    - by Geertjan
    Take a look at Nils Hoffmann's metabolomic analyzer, especially at the Groovy editor contained within it: Obviously, it would be cool if the Groovy editor in the app above had syntax coloring and other editor features helpful when coding Groovy. However, as I showed in If You Include the Groovy Editor, the NetBeans Groovy support has multiple dependencies on other modules that would be completely superfluous in the above application, and they'd make the app much heavier than it is, simply because of all the Groovy dependencies. But today I thought of a simple solution. Why not take the Groovy.g file (i.e., the ANTLR definition), such as this one [though that's probably not the most up to date one; I'm wondering how to find the most up to date one] and then apply the content of this screencast (made by me) to the Groovy.g file? Within a few minutes, you should end up with Groovy syntax coloring. OK, so that's not a full-blown Groovy editor, but syntax coloring is surely a cool thing to have in the app with which this blog entry started. Sure, this means creating a new Groovy editor from scratch, but the point is that doing so can be very easy: the syntax coloring can simply be generated via the instructions above. I'm going to try it myself in the next few days, but it would be cool if others out there would try this too!

    Read the article

  • YouTube: Promotional AgroSense Movie

    - by Geertjan
    Here's a cool YouTube promotional movie on AgroSense created by Ordina in the Netherlands. AgroSense is an open source Java system for the precision agriculture industry, which won the IT Environment Award in the Netherlands last week: If your understanding of Dutch limits your appreciation of the movie above, here's a rough translation, together with the names of the speakers in the movie: Precision agriculture, an innovative form of agriculture in which local variations in soil, crop, and atmosphere are taken into account, is the high-tech sustainable agriculture of tomorrow. The use of fertilizer, water, and energy can in this way be significantly reduced. "If, ten or twenty years from now, we are to continue having our agricultural industry in good shape, and in a continuing state of health, we'll need to register and work with data because if we want to enable crops to provide higher value, we'll need to create higher levels of transparency throughout the agriculture chain." Lenus Hamster, farmer in Nieuwolda Groningen "Industry is becoming increasingly data intensive. By combining pragmatic usefulness with innovative sustainability, AgroSense offers the Netherlands the possibility to continue being a leading player in the agrofood sector." Art Lighthart, Architect at Ordina AgroSense offers an open source solution in which all services for precision agriculture are brought together. In 2012, co-operation is being sought with organizations to make AgroSense available to around 10,000 Dutch farmers in the arable crop sector. By the way, the last sentence above implies the NetBeans Platform will be used by around 10,000 Dutch farmers.

    Read the article

  • Think of our partner web TV site!

    - by swalker
    We would like to warmly thank you for your strong mobilization around the "Mini Studio ITPlace.tv" project. Thank you for showing up! 14 partner interviews; Channel IAs worthy of the top prime-time news anchors; partners delighted to get the chance to speak; valuable content on numerous themes. A special thank-you to Loïc, who managed to mobilize five of his partners around themes we rarely get to address (cross Apps/Tech solutions, security, etc.). We will get back to you shortly to present the final video edits and to discuss with your partners how best to use these media. To connect: http://www.itplace.tv

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing also is that Tinkerforge products are open source, both on the hardware and software levels, so that you can take their bases as a starting point for your own modifications. "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article) Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the Nordhorn/Lingen area in Germany, had the inspired idea to focus the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge! Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!
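    To give a flavour of those APIs, here is a minimal hedged sketch in Java using the standard Tinkerforge bindings (the bricklet UID "XYZ" is a placeholder; you would substitute the UID of your own Temperature Bricklet, and the brickd daemon must be running on the host):

      import com.tinkerforge.IPConnection;
      import com.tinkerforge.BrickletTemperature;

      public class TemperatureDemo {
          public static void main(String[] args) throws Exception {
              IPConnection ipcon = new IPConnection();
              ipcon.connect("localhost", 4223); // brickd's default port

              // "XYZ" is a placeholder UID for the attached bricklet
              BrickletTemperature sensor = new BrickletTemperature("XYZ", ipcon);

              // getTemperature() reports hundredths of a degree Celsius
              short raw = sensor.getTemperature();
              System.out.printf("Temperature: %.2f °C%n", raw / 100.0);

              ipcon.disconnect();
          }
      }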

    Read the article

  • 2013 EC Elections Results

    - by Heather VanCura
    The 2013 Fall Executive Committee (EC) Elections process is now complete. Congratulations to the following JCP Members, the new and re-elected EC Members! We had a slight increase in JCP Member voter turnout at ~25% (up from 24% in 2012). All Ratified candidates and the top eight Elected candidates were elected by the JCP Membership. As part of the transition to a merged EC, Members elected in 2013 are ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one-year term (details below).

    Ratified Seats: Credit Suisse, Ericsson, Freescale, Fujitsu, Gemalto M2M, Goldman Sachs, Hewlett-Packard, IBM, Intel, Nokia, Red Hat, SAP, SouJava, Software AG, TOTVS, and V2COM.

    Open Election Seats: Eclipse Foundation, Twitter, London Java Community, CloudBees, ARM, Azul Systems, Werner Keil, and MoroccoJUG.

    Newly elected EC Members take their seats on Tuesday, 12 November 2013. More information is available on the JCP Elections page.

    Detailed Election Results
    Voting Period: 15 - 28 October 2013
    Number of Eligible Voters: 1088
    Percent of Eligible Members Casting Votes: 24.77%

    Ratified Seats:

    Candidate                        Yes Votes (%)   No Votes (%)   Abstentions
    Credit Suisse (2-year term)      196 (84)        38 (16)        36
    Ericsson (2-year term)           196 (88)        27 (12)        47
    Freescale (1-year term)          151 (74)        53 (26)        66
    Fujitsu (2-year term)            194 (87)        29 (13)        47
    Gemalto M2M (1-year term)        170 (80)        42 (20)        58
    Goldman Sachs (1-year term)      143 (64)        80 (36)        47
    Hewlett-Packard (2-year term)    191 (82)        43 (18)        36
    IBM (2-year term)                226 (91)        22 (9)         22
    Intel (2-year term)              214 (90)        24 (10)        32
    Nokia (1-year term)              139 (64)        78 (36)        53
    Red Hat (2-year term)            245 (95)        12 (5)         13
    SAP (1-year term)                166 (75)        56 (25)        48
    SouJava (2-year term)            226 (92)        19 (8)         25
    Software AG (1-year term)        167 (78)        47 (22)        56
    TOTVS (1-year term)              129 (69)        59 (31)        82
    V2COM (1-year term)              135 (71)        54 (29)        81

    Open Election Seats: The top eight candidates have been elected; the top four receive a two-year term, and the next four receive a one-year term.

    Candidate                              Votes (%)
    Eclipse Foundation (2-year term)       221 (14)
    Twitter (2-year term)                  203 (13)
    London Java Community (2-year term)    191 (12)
    CloudBees (2-year term)                179 (11)
    ARM (1-year term)                      176 (11)
    Azul Systems (1-year term)             166 (10)
    Werner Keil (1-year term)              128 (8)
    MoroccoJUG (1-year term)               93 (6)
    Karan Malhi                            56 (3)
    ChinaNanjingJUG                        51 (3)
    JUG Joglosemar                         47 (3)
    Viresh Wali                            45 (3)
    ITP_JAVA                               44 (3)
    None of the Above                      3 (0)

    Read the article

  • MIX 2010 Covert Operations Day 3

    - by GeekAgilistMercenary
    I rolled over to the Mandalay for breakfast. There I met a couple of guys who were really excited about the new Windows 7 Phone. They, as I, are hopeful that the phone really gets a big push and some penetration into the market. Not because we don't like any of the other phones, but because this phone is so much better in many ways. From a developer's perspective, creating applications for Windows 7 Mobile will be vastly superior in ease, capabilities, and other aspects. The architecture, existing code base, examples, and provisions for creating things on the 7 Mobile device already exist RIGHT NOW. There is no reason, except for fickle market conditions, for this phone not to just explode onto the market. But alas, I won't hold my breath.

    The day three keynote provided a whole new slew of things. It also seemed that things got a lot more technical in this second keynote. oData was one of the very technical bits, yet it included almost no code. Starting with a Netflix example and going all the way to the Codename "Dallas" effort, the oData services provide some expansive possibilities. A four-way mash-up was then shown for finding a movie, finding local places to have a viewing, and getting information about the movie and where to prospectively find and buy additional movie bits. The display was, of course, on a Windows 7 Mobile device with literally a click to view each set of data. The back end and the front end of this were beautifully smooth. The Dallas Project also has a lot of potential for analytics in dashboard and scorecard creation. If there is a need or reason to provide data to a vast and wide range of clients, Dallas is a prime example of how to do that.

    Azure Clouds

    After the main keynote I checked out (while developing a working WPF & Silverlight application for work) the session on deploying ASP.NET applications, services, etc., into the cloud. The session was pretty good, but I'll admit I got a little unfocused from it a few times. It is, after all, hard to do two things at one time. I did take note that deploying to the cloud is still a multiple-step process. This is a good thing and a bad thing: there need to be more checks and verifications when deploying something into the cloud, just for technical reasons, yet I feel the process should be streamlined. Going back and forth between the web and Visual Studio as the interface also seems kind of clunky; deployment should be able to be completed from within Visual Studio, in my view. Overall, the cloud is getting more and more impressive in function as well as theory.

    That's it from me so far on the third day of MIX. I'll be note-taking and studying hard to have more good tidbits to provide. Thanks for reading; if you're curious about more of my writing, check out this original entry at my other blog, Agilist Mercenary.

    Read the article

  • GlassFish 3.1 has been released

    - by peter.nagy
    Since the blogosphere and most of the Java and open source community sites are full of this, I won't go into the details. But if you need information, I definitely recommend Arun Gupta's blog as a starting point. On this topic his blog is, in my opinion, a better home page than a Google search. The most important thing: clustering has arrived. Below are the contents of the downloadable bundles.

    Read the article

  • Demantra Partitioning and the First PK Column

    - by user702295
    We have found that it is necessary in Demantra to have an index that matches the partition key, although it does not have to be the PK. It is OK to create a new index instead of changing the PK. For example, if my PK on SALES_DATA is (ITEM_ID, LOCATION_ID, SALES_DATE) and I decide to partition by SALES_DATE, then I should add an index starting with the partition key, like this: (SALES_DATE, ITEM_ID, LOCATION_ID). Note that the first column of the new index matches the partition key. It might also be helpful to create a second index with the other PK columns reversed (SALES_DATE, LOCATION_ID, ITEM_ID). Again, the first column matches the partition key.
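    As a sketch, the two indexes described above could be created like this (the table and column names come from the example; the index names are made up):

      -- First column of each index matches the SALES_DATE partition key
      CREATE INDEX sales_data_part_idx1
          ON sales_data (sales_date, item_id, location_id);

      CREATE INDEX sales_data_part_idx2
          ON sales_data (sales_date, location_id, item_id);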

    Read the article

  • Techn'Oracle and Sales'Oracle Are Here!

    - by swalker
    The Techn'Oracle and Sales'Oracle sessions have launched in the regions. As every year, several hundred employees of our Partners will train on our latest offerings and products! The Techn'Oracle sessions present the technical content of our products and their advantages for Customers, in terms of architecture, TCO, and flexibility for information systems. The Sales'Oracle sessions aim to show the value of our offerings in Customer terms, the "what is it for", to enable Partners to increase their presence and revenue with our Customers. >> See the Calendar for dates and themes here

    Read the article

  • Mobile Shopping Alerts

    - by David Dorf
    It's been popular to offer coupons when people check in to a store, because you're catching them at the best possible time: they're presumably in a shopping state of mind, and they're at your store. But wouldn't it be even better to catch the people walking by your store and entice them to visit? That's the concept of geo-fences. When people enter a geographic zone, they are sent a relevant text message alerting them about something nearby. I wrote about Placecast doing this for The North Face, noting that the messages were a unique combination of both offers and useful information about outdoor activities. After creating a program with European carrier O2, Placecast recently entered into an agreement to provide similar services to AT&T customers. The ShopAlerts program allows AT&T customers in Chicago, Los Angeles, New York City, and San Francisco to opt in to receive these messages. The program will be expanded nationwide as early as this summer. It's a much better model for customers (and Placecast) to sign up once with the carrier instead of with each individual retailer, but I hope the messages aren't restricted to advertising. I really like the idea of providing other information, such as nearby special events, races, and perhaps even things to avoid, like construction.

    Read the article

  • Unlock the Java EE 6 Platform using NetBeans 7.1

    - by arungupta
    NetBeans IDE provides tools, templates, and code generators that can be used for the specifications that are part of the Java EE 6 Platform. In a recent article, Geertjan builds a simple end-to-end application using the standard Model-View-Controller architecture. It uses Java Persistence API 2, Servlets 3, JavaServer Faces 2, Enterprise JavaBeans 3.1, Contexts and Dependency Injection 1.0, and the Java API for RESTful Web Services 1.1, showing the complete stack. A self-paced and extensive hands-on lab covering this article and much more is also available here. A video (47 minutes) explaining how to build a similar application can be viewed here.
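    To give a feel for how those specifications fit together, here is a minimal hedged sketch (not the code from Geertjan's article; the Customer entity and resource names are invented) of a JPA 2 entity exposed through an EJB 3.1 session bean and a JAX-RS 1.1 resource. The three classes would normally live in separate files, and the usual javax.ws.rs.core.Application subclass (or web.xml entry) is assumed to activate JAX-RS:

      import javax.ejb.EJB;
      import javax.ejb.Stateless;
      import javax.persistence.Entity;
      import javax.persistence.EntityManager;
      import javax.persistence.GeneratedValue;
      import javax.persistence.Id;
      import javax.persistence.PersistenceContext;
      import javax.ws.rs.GET;
      import javax.ws.rs.Path;
      import javax.ws.rs.PathParam;
      import javax.ws.rs.Produces;
      import javax.xml.bind.annotation.XmlRootElement;

      // Customer.java -- a JPA 2 entity, JAXB-annotated so JAX-RS can marshal it
      @Entity
      @XmlRootElement
      public class Customer {
          @Id @GeneratedValue
          private Long id;
          private String name;

          public Long getId() { return id; }
          public void setId(Long id) { this.id = id; }
          public String getName() { return name; }
          public void setName(String name) { this.name = name; }
      }

      // CustomerFacade.java -- an EJB 3.1 stateless session bean using JPA
      @Stateless
      public class CustomerFacade {
          @PersistenceContext
          private EntityManager em;

          public Customer find(Long id) {
              return em.find(Customer.class, id);
          }
      }

      // CustomerResource.java -- a JAX-RS 1.1 resource delegating to the EJB
      @Path("customers")
      public class CustomerResource {
          @EJB
          private CustomerFacade facade;

          @GET
          @Path("{id}")
          @Produces("application/xml")
          public Customer get(@PathParam("id") Long id) {
              return facade.find(id);
          }
      }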

    Read the article

  • Changes to the Specialization of VADs

    - by swalker
    Effective 1 November 2011, VADs (Value Added Distributors) with a valid VAD agreement no longer have to meet the customer-reference requirements listed in the business-criteria section in order to specialize. VADs must, however, still meet all business and competency criteria in the relevant Knowledge Zone before their specialization is approved.

    Read the article

  • OBIEE 11g recommended patch sets

    - by THE
    Martin has busied himself combining the recommended patch sets for OBIEE 11g into one single useful KM note. (This one contains the recommendations for 11.1.1.5 as well as those for 11.1.1.6.) OBIEE 11g: Required and Recommended Patches and Patch Sets (Doc ID 1488475.1). So if you are looking for update/patch information for your OBIEE installation, this is most likely a useful stop. And as patching is an ongoing process, you may want to bookmark this KM doc, as I am sure Martin will keep it current as new patches come out. Oh - and if you are looking for upgrade information from 11.1.1.5 to 11.1.1.6, KM Doc ID 1434253.1 might just be the thing you are looking for.

    Read the article

  • Hot-hot-hot for jobs in Java and mobile software

    - by hinkmond
    It's hot-hot-hot! The market for Java and mobile developers keeps growing hotter and hotter--so says the latest Dice survey. See: Dice survey says Java & Mobile are tops. Here's a quote: The market for mobile developers is expanding faster than the talent pool can adapt, a Dice survey indicates. Software developers in general—as well as Java, mobile software and Microsoft .Net developers in particular—are in short supply today. Those fields represent four of the top five most difficult positions IT managers are looking to fill... The New York/New Jersey metro area led the country with 8,871 positions listed... So, if you are looking to get into software development, get crackin' on learning Java mobile programming and move to NY or NJ. Let's go Mets! Hinkmond

    Read the article

  • Recommended: git-completion.bash

    - by andy.grover
    If you use git on a daily basis like I do, git-completion.bash is a great way to make your life a little easier. While I guess it does add tab-completion for git commands, the most useful feature for me is the ability to put the current branch into the command-line prompt. Now that I am comfortable working with multiple git branches and remotes, a little reminder of where I am prevents time-consuming mistakes. git-completion.bash lives in git's own git tree:

      git clone git://git.kernel.org/pub/scm/git/git.git
      cp git/contrib/completion/git-completion.bash ~/.git-completion.sh

    Follow the instructions in the file to set it up and enable showing the branch in $PS1. I also use this alias in my ~/.gitconfig, which is convenient:

      [alias]
              log1 = log --pretty=oneline --abbrev-commit

    Have fun!
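    For reference, the setup described in the file boils down to something like this in your ~/.bashrc (a sketch; the exact prompt format is a matter of taste, and __git_ps1 is the helper function the completion script provides):

      # Load the completion script, then show the current branch in the prompt
      source ~/.git-completion.sh
      export PS1='\u@\h:\w$(__git_ps1 " (%s)")\$ '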

    Read the article

  • To encryption=on or encryption=off a simple ZFS Crypto demo

    - by darrenm
    I've just been asked twice this week how I would demonstrate that ZFS encryption really is encrypting the data on disk. It needs to be really simple, and the target isn't forensics or cryptanalysis, just a quick demo to show the before and after. I usually do this small demo using a pool based on files so I can run strings(1) on the "disks" that make up the pool. The demo will work with real disks too, but it will take a lot longer (how much longer depends on the size of your disks). The file hamlet.txt is this one from gutenberg.org.

      # mkfile 64m /tmp/pool1_file
      # zpool create clear_pool /tmp/pool1_file
      # cp hamlet.txt /clear_pool
      # grep -i hamlet /clear_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears.

      # zpool export clear_pool
      # strings /tmp/pool1_file | grep -i hamlet | wc -l

    Note the number of times hamlet appears on disk - it is 2 more, because the file is called hamlet.txt and file names are in the clear as well, and we keep at least two copies of metadata. Now let's encrypt the file systems in the pool. Note you MUST use a new pool file; don't reuse the one from above.

      # mkfile 64m /tmp/pool2_file
      # zpool create -O encryption=on enc_pool /tmp/pool2_file
      Enter passphrase for 'enc_pool':
      Enter again:
      # cp hamlet.txt /enc_pool
      # grep -i hamlet /enc_pool/hamlet.txt | wc -l

    Note the number of times hamlet appears is the same as before.

      # zpool export enc_pool
      # strings /tmp/pool2_file | grep -i hamlet | wc -l

    Note the word hamlet doesn't appear at all! As I said above, this isn't intended as "proof" that ZFS does encryption properly, just as a quick demo.

    Read the article
