Search Results

Search found 37101 results on 1485 pages for 'array based'.

  • What is the best way to store ancillary data with a 2D timeseries object in R?

    - by Mike52
    I am currently trying to move from Matlab to R. I have 2D measurements consisting of irradiance over time and wavelength, together with quality flags and uncertainty/error estimates. In Matlab I extended the timeseries object to store both the wavelength array and the auxiliary data. What is the best way in R to store this data? Ideally I would like this data to be stored together such that, e.g., window(...) keeps all data synchronized.

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies. And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services). In short, it has been a fortnight of Microsoft cloud computing.

    On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles. But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel. Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials. And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting.

    I have to say I like a lot of what I’ve been seeing. Azure’s not perfect, and BPOS certainly isn’t either. But there’s good stuff in all these products, and there’s a lot of value.

    Azure Goes Deep

    Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now. The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes. It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.

    Whether with Azure Storage or SQL Azure, Azure does data. And OData is everywhere. Azure Table Storage supports an OData interface. So does SQL Azure and so does DataMarket (the former project “Dallas”). That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner. And for more .NET-centric implementations, Azure AppFabric Caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more.

    Snapping in Place

    Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand. I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now. The products bear this out, and so do my observations of the product teams’ motivation and high morale. It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so. With Azure, both requirements are in place.

    BPOS: Bad Acronym, Easy Setup

    BPOS is about products you already know: Exchange, SharePoint, Live Meeting and Office Communications Server. As such, it’s hard not to be underwhelmed by BPOS. Until you realize how easy it makes it to get all that stuff set up.
    I would say that going from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my smartphone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I want from my Azure Christmas Next Year

    Not everything about Microsoft’s cloud is good. I close this post with a list of things I’d like to see addressed:

    - BPOS offerings are still based on the 2007 Wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises one. Office 365 can’t come fast enough.
    - Azure’s Internet tooling and domain naming are scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there’s appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.
    - Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work. I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.
    - Beta programs, if they’re open, should onboard me quickly. I know the process is difficult and everyone’s going as fast as they can. But I don’t know why it’s so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.
    - Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now. But we need more. WebMatrix has none and that’s just silly, now that the Extra Small Instance is being introduced.
    - The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better and I want them to evangelize it much more aggressively. There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot

    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

  • Tellago Technology Days: Enterprise Mobile Backend as a Service

    - by gsusx
    Last week, as part of Tellago's Technology Update, I delivered a presentation about modern enterprise mobility powered by cloud-based mobile backend as a service (mBaaS) models. During the presentation we covered some of the most common enterprise mBaaS patterns that can be implemented using current technologies. Below you can find the slide deck I used during the presentation. Feel free to take a look and send me some feedback....(read more)

  • Map building - Tower Defense

    - by Dan K
    Before diving too deep into my question, let it be known that I am still learning JavaScript and figured a simple Tower Defense game would be an excellent way to learn things. I have found a simple background image with a path drawn on it, and my question is how I would go about building a path so that I can animate my objects. Would I have to take the image and overlay a grid system, or can I store the path in some sort of array and have my objects move across it? Here is the background image:

  • Where do I create the file .htaccess, in order to serve my HTML5 cache manifest file correctly?

    - by Forrest
    From a post on http://diveintohtml5.org/offline.html (Wayback Machine Copy): "Your cache manifest file can be located anywhere on your web server, but it must be served with the content type text/cache-manifest. If you are running an Apache-based web server, you can probably just put an AddType directive in the .htaccess file at the root of your web directory: AddType text/cache-manifest .manifest" Where do I create the .htaccess file? Do I need some more setup with apachectl? Thanks very much!

  • SEO for single-page content-less Web App

    - by brillout.com
    As written in the title, the website on which I am trying to do Search Engine Optimization has the following two properties: it doesn't have any content in the SEO sense (it doesn't hold any information and only offers functionality), and it consists of only one page/URL. Since most of the SEO tips/tricks I read are based on content, how do I perform SEO on such a website? For more info: the website is basically just a timer/alarm/stopwatch.

  • SQL Server 2008 R2 Express Edition - a treat for small-scale businesses

    - by ssqa.net
    SQL Server Express Edition is a lightweight offering within the SQL Server arena; it is classed as a database platform that makes it easy to develop data-driven applications that are rich in capability, offer enhanced storage security, and are fast to deploy. Also, SQL Server 2008 Express with Advanced Services is an edition of the same flock that includes a new graphical management tool, features for reporting, and advanced text-based search capabilities. You can add the GUI capabilities for management...(read more)

  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides, of which there are over 300, or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are:

    System.Reflection.MemberInfo
        public abstract object[] GetCustomAttributes(bool inherit);
        public abstract object[] GetCustomAttributes(Type attributeType, bool inherit);
        public abstract bool IsDefined(Type attributeType, bool inherit);

    System.Attribute
        public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit);
        public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit);

    If you take the following simple class hierarchy:

        public abstract class BaseClass
        {
            private bool result;

            [DefaultValue(false)]
            public virtual bool SimpleProperty
            {
                get { return this.result; }
                set { this.result = value; }
            }
        }

        public class DerivedClass : BaseClass
        {
            public override bool SimpleProperty
            {
                get { return true; }
                set { base.SimpleProperty = value; }
            }
        }

    Given a PropertyInfo object (which is derived from MemberInfo, and represents a property in reflection), you might expect that these methods would return the same result. Unfortunately, that isn’t the case. The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don’t inherit from System.Attribute. The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute. This is a fairly subtle difference that can produce very unexpected results if you aren’t careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this:

        PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");
        var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true);
        var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

    The attributeList1 array will be empty while the attributeList2 array will contain the attribute instance, as expected.
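
    To make the difference concrete, here is a minimal console sketch (the class and variable names are mine, not from the original post; it assumes the BaseClass/DerivedClass definitions shown above are in the same file) that prints how many DefaultValueAttribute instances each API finds for SimpleProperty:

        using System;
        using System.ComponentModel;
        using System.Reflection;

        // Hypothetical demo comparing the two attribute-retrieval APIs discussed above.
        public static class AttributeLookupDemo
        {
            public static void Main()
            {
                PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");

                // Reflects metadata only: 'inherit' is ignored for properties, so the
                // attribute declared on the base property is not found here.
                object[] viaMemberInfo =
                    info.GetCustomAttributes(typeof(DefaultValueAttribute), true);

                // Walks the implied inheritance chain through the accessors, so the
                // base declaration is found.
                Attribute[] viaAttribute =
                    Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

                Console.WriteLine("MemberInfo.GetCustomAttributes: " + viaMemberInfo.Length); // 0
                Console.WriteLine("Attribute.GetCustomAttributes:  " + viaAttribute.Length);  // 1
            }
        }

    Running this against the classes above should print 0 for the MemberInfo call and 1 for the Attribute call, which is exactly the subtle difference the post describes.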

  • The PASS Board of Directors Q&A Session

    - by andyleonard
    Friday afternoon (18 Oct 2013), the PASS Board of Directors met with interested members of the SQL Server Community to answer questions. Paraphrases of some questions and notes I collected during the session follow (please note: this is not a transcript). Elections: Kendall Van Dyke asked about duplicate voting. The Board responded that they had looked into the matter and identified duplicate memberships based on names and addresses, but with different email addresses. After filtering for duplicate...(read more)

  • Distributed C++ game server which uses a database

    - by Slav
    Hello. My C++ turn-based game server (which uses a database) can no longer keep up with the current average number of clients (players), so I want to expand it to more than one computer and database, where all clients will still remain within a single game world (the servers must communicate with each other and use multiple databases). Are there tutorials/books/common standards which explain how to do this in the best way?

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Simone Geib
    Rajesh Raheja, software architect at Oracle, has recently posted the first of a series of blogs on the topic of integrating with Oracle Fusion Applications, which is the next generation of enterprise applications built on top of Oracle Fusion Middleware. His goal is to share the ease with which integrations are now possible using standards-based technologies with enterprise applications. You can find his full blog post here.

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axis changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources

    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests
      - Experiment Metadata
      - Experiment Datasets: 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads
      - Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests
      - Experiment Metadata
      - Experiment Datasets
      - Raw Data: 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks
    - Excel worksheet showing summarizations and comparisons
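
    For reference, a minimal sketch of the blocked/parallel approach described above might look like the following. It uses the classic Microsoft.WindowsAzure.StorageClient API (PutBlock/PutBlockList, as in the 1.x client library mentioned in the post) together with Parallel.For; the class name, connection string, container name, and block size are placeholders, and this is an illustrative sketch rather than the author's actual test harness:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        public static class ParallelBlockUploader
        {
            // Splits a local file into fixed-size blocks, uploads the blocks in
            // parallel with PutBlock, then commits them with PutBlockList.
            public static void Upload(string connectionString, string containerName,
                                      string filePath, int blockSizeBytes)
            {
                CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
                CloudBlobContainer container =
                    account.CreateCloudBlobClient().GetContainerReference(containerName);
                container.CreateIfNotExist();
                CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(filePath));

                long fileLength = new FileInfo(filePath).Length;
                int blockCount = (int)Math.Ceiling((double)fileLength / blockSizeBytes);
                string[] blockIds = new string[blockCount];

                Parallel.For(0, blockCount, i =>
                {
                    // Each worker reads its own slice of the source file.
                    long offset = (long)i * blockSizeBytes;
                    int size = (int)Math.Min(blockSizeBytes, fileLength - offset);
                    byte[] buffer = new byte[size];
                    using (FileStream fs = new FileStream(filePath, FileMode.Open,
                                                          FileAccess.Read, FileShare.Read))
                    {
                        fs.Seek(offset, SeekOrigin.Begin);
                        int read = 0;
                        while (read < size)
                            read += fs.Read(buffer, read, size - read);
                    }

                    // Block IDs must be Base64 strings of equal length within a blob.
                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i));

                    // Send the MD5 of the block so the service can verify the bits.
                    string md5;
                    using (MD5 hasher = MD5.Create())
                        md5 = Convert.ToBase64String(hasher.ComputeHash(buffer));

                    blob.PutBlock(blockId, new MemoryStream(buffer), md5);
                    blockIds[i] = blockId;
                });

                // Commit the block list in order to assemble the final blob.
                blob.PutBlockList(blockIds);
            }
        }

    Calling ParallelBlockUploader.Upload(connectionString, "uploads", @"C:\data\bigfile.bin", 1024 * 1024) would use the 1MB block size that the post found to be the best fixed choice.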

  • MySQL Input Parameters Add Flexibility to Crosstab Stored Procedures

    When generating a result set where the query contains an unknown number of column and/or row values, we can use a combination of Prepared Statements, which allows us to tailor the output based on the number of data values. We can also add input parameters to a procedure to assign the field names, aliases, and even the aggregate function!

  • Part 1 Basic Webtrends REST Examples

    - by GeekAgilistMercenary
    In this entry I just want to cover some examples of how to connect to Webtrends DX Web Services. The DX Web Services use REST as the architecture, providing simple URI-based endpoints to connect to. With the Webtrends SDK you can connect to these services with your account information. Here are the basic steps to retrieve a profile list, the reports from one of those profiles, and then the report you want from that report list.

    The first step is to create a Webtrends user:

        WebTrends.Sdk.Account.User webtrendsUser = new Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

    After you create the Webtrends user, simply request a profile list by getting a list of ProfileDefinition objects:

        List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

    Next you will want to grab a report based on the profile you are in and your credentials:

        List<WebTrends.Sdk.Report.ReportDefinition> reports =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

    In the code above, i would equate to the specific profile you want from the retrieved list in the profiles list. The common scenario is that one has pulled the profiles into a drop-down, combo, or list box that the user can select from. Then, when the user selects the specific profile, that profile object can be used to pull the list of ReportDefinitions.

    Once we have the report definitions, all sorts of criteria can be added together to query for a specific report. This is also where things can get a little tricky. For instance, take a look at the code below:

        WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            report.ID.ToString(), profiles[i].ID.ToString(), "2010m01", webtrendsUser);

    CreateDimensionalReport takes 4 parameters for this particular overload: the report ID, the profile ID, the Webtrends date format, and the Webtrends user object. There are a number of other overloads available within this factory method that allow for passing the specific REST URI and other criteria to retrieve the report of your choice. In the near future we will be adding some more to this method also, which will provide more flexibility without needing to use the full REST URI.

    I will have more on this, so all you coders out there using Webtrends DX Services, I hope this is helpful! Enjoy.
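
    Pulling the snippets above together, a rough end-to-end sketch might look like this (the credentials are placeholders, the first profile and report are picked arbitrarily, and the exact return type of CreateDimensionalReport is whatever the SDK defines; treat this as an illustration of the flow rather than production code):

        using System.Collections.Generic;
        using WebTrends.Sdk.Profile;
        using WebTrends.Sdk.Report;

        public static class WebtrendsReportFlow
        {
            public static void Run(string username, string password, string account)
            {
                // 1. Create the Webtrends user from account credentials.
                WebTrends.Sdk.Account.User webtrendsUser = new WebTrends.Sdk.Account.User();
                webtrendsUser.UserName = username;
                webtrendsUser.Password = password;
                webtrendsUser.AccountName = account;

                // 2. Retrieve the profiles visible to this user.
                List<ProfileDefinition> profiles =
                    WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

                // 3. Retrieve the reports defined for the chosen profile
                //    (index 0 here; normally the user picks one from a drop-down).
                List<ReportDefinition> reports =
                    WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[0], webtrendsUser);

                // 4. Request a dimensional report for January 2010 ("2010m01").
                var dimensionalReport = WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
                    reports[0].ID.ToString(), profiles[0].ID.ToString(), "2010m01", webtrendsUser);
            }
        }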

  • Chargeback and showback... both a 'throwback'

    - by llaszews
    I’ve been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.

    Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies, when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and to charge them back to the company, department or user that is using those resources limits a clear measure of business benefit and impacts the company’s ability to measure the Return on Investment (ROI).

    An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise. Most companies implement a showback mechanism before a full chargeback system is put in place.

    Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers in understanding how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

  • Managing OpenX campaigns for several advertisers

    - by Mauricio Scheffer
    We're running our own OpenX instance with several advertisers. Each month, the advertiser's campaign should reset its available impressions, based on whatever plan the advertiser paid for. I think this is a pretty common scenario. The question is: how do you manage this? Do you expire the old campaign and create a new one with the new number of impressions? Or do you just add the impressions to the existing campaign?

  • Looking for SQL 2008 R2 Training Resources

    - by NeilHambly
    Are you looking for some R2 training resources? Then this will most likely keep you busy for a while digesting all the content: the SQL Server 2008 R2 Update for Developers Training Kit (April 2010 Update), available at http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=fffaad6a-0153-4d41-b289-a3ed1d637c0d. It contains the following: Presentations (22), Demos (29), Hands-on Labs (18), and Videos (35). SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations...(read more)

  • TomEE Integration in NetBeans Next

    - by Geertjan
    At JavaOne 2013, there was a lot of buzz around the TomEE server, e.g., many Tweets, a nice party, and a new TomEE consulting company. For those tracking TomEE developments, it is interesting to note that recently the NetBeans IDE development builds have had added to them... TomEE support. Note: The TomEE support described here is not in NetBeans IDE 7.4, but in development builds for the next release of NetBeans IDE. For example, with NetBeans IDE development builds you're able to:

    - register TomEE as a server in the Services window (TomEE has several distributions; one can use the "with JAX-RS" one, for example)
    - create a Java EE 6 web project (e.g., Maven based) against this server
    - create JPA entities from a database
    - create JAX-RS classes from JPA entities
    - create JSF pages from JPA entities
    - the IDE lets you create a new data source for TomEE and deploy it to the server
    - the IDE figures out the components that are already packaged in TomEE, and the fact that (unlike with regular Tomcat) it does not need to package any components such as the JSF implementation, persistence provider, or JAX-RS runtime, so that the resulting WAR file is very small
    - the IDE can also do "deploy on save" with TomEE, so that your development cycle is very fast

    Adam Bien blogged about how he set up TomEE some time ago, here. The official support in NetBeans IDE will be much more tightly integrated, simplifying the steps Adam describes. For example, the IDE does step 2 from Adam's blog for you, i.e., it sets up TomEE deployment roles. Moreover, it knows about all the technologies included in TomEE so that it can optimize the packaging; it knows about TomEE's persistence setup; it can work with TomEE data sources, etc.

    Below you see a Maven-based Java EE 6 PrimeFaces application (all entities and JSF pages generated from a database) deployed to TomEE in NetBeans IDE: And here's the management console for configuring and finetuning TomEE in NetBeans IDE:

    When I tried out the NetBeans IDE development build and TomEE, to see how everything fits together, I was surprised at how fast TomEE started up. Not sure what they did to it, but it seems like a server on steroids. And setting it up in NetBeans IDE was trivial. Add the simple setup of TomEE in NetBeans IDE to the many benefits that the widely praised out-of-the-box NetBeans Maven tools make possible, together with the fact that not one single plugin had to be installed to get everything you see described here up and running... and you have a really powerful combination of dev tools, all for free.

  • Exit Infragistics, Enter Telerik

    - by Anthony Trudeau
    Today I made the purchase of the Premium Collection of components from Telerik. This follows an evaluation I’ve been doing to replace the Infragistics components we currently use for Windows Forms, ASP.NET MVC, and WPF.

    It was not a formal evaluation. I had already decided to move the company away from Infragistics. That decision was mostly born out of frustration with support over using the Infragistics components in my first production MVC application. One such issue was a simple scenario where you have a model that has a scalar property that can be one value out of a list. The built-in combobox does this, but I was told by Infragistics support that they didn’t support it, and it took them several emails and days of waiting between responses to determine that. I implemented this in Telerik in a minute, not including the several minutes it took me to get a rudimentary understanding of the component and its API.

    Here’s the code using the built-in combobox:

        @Html.DropDownListFor(x => x.VendorId, new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId), "Select Id")

    Here’s the code using the Telerik combobox:

        @(Html.Telerik().ComboBoxFor(model => model.VendorId)
            .AutoFill(true)
            .BindTo(new SelectList(ViewBag.Vendors, "VendorId", "VendorName", Model.VendorId))
        )

    I chose Telerik over other competitors based on the professional appearance of their website and how easy it was to find information. I’d like to say I had time to evaluate other Infragistics competitors. Due to time constraints I had to make an initial decision based on superficial, but still important, things. I picked Telerik with the plan to only look further at other companies if my evaluation didn’t meet my expectations. Luckily they did, because I didn’t relish the thought of carving out more time to evaluate another set of components.

    Overall my experience with Telerik has been superior to Infragistics in every way. The installation was easy using their control panel installer application. Getting up to speed has been easy. And the communication from Telerik has met my expectations. And we’ll continue to be good as long as I don’t start getting email messages from a sales rep saying that they want to talk to me about training and consulting – I’m looking at you, Infragistics.

  • 5 Plugins To improve the WordPress WYSIWYG Editor

    - by Matt
    TinyMCE is a web-based, platform-independent JavaScript/HTML WYSIWYG editor control. It is released by Moxiecode Systems AB as open source software. CKEditor For WordPress: CKEditor is a text editor used inside web pages. While editing, you see text similar to how it will appear when published. CKEditor is compatible with all modern browsers [...]
