Search Results

Search found 30549 results on 1222 pages for 'object orientation'.

  • First JSRs Proposed for Java EE 7

    - by Jacob Lehrbaum
    With the approval of Java SE 7 and Java SE 8 JSRs last month, attention is now shifting towards the Java EE platform. While functionality pegged for Java EE 7 was previewed at least as early as Devoxx, the filing of these JSRs marks the first officially proposed specifications for the next generation of the popular application server standard. Let's take a quick look at the proposed new functionality.

    Java Persistence API 2.1
    The first of the new proposed specifications is JSR 338: Java Persistence API (JPA) 2.1. JPA is designed for use with both Java EE and Java SE and: "deals with the way relational data is mapped to Java objects ("persistent entities"), the way that these objects are stored in a relational database so that they can be accessed at a later time, and the continued existence of an entity's state even after the application that uses it ends. In addition to simplifying the entity persistence model, the Java Persistence API standardizes object-relational mapping." (more about JPA)

    JAX-RS 2.0
    The second of the new Java specifications that has been proposed is JSR 339, otherwise known as JAX-RS 2.0. JAX-RS provides an API that enables the easy creation of web services using the Representational State Transfer (REST) architecture. Key features proposed in the new JSR include a Client API, improved support for URIs, a Model-View-Controller architecture and much more!

    More information
    Official proposal for Java Persistence 2.1 (jcp.org)
    Official proposal for JAX-RS 2.0 (jcp.org)
    Kicking off Java EE 7 with 2 JSRs: JAX-RS 2.0 / JPA 2.1 (the Aquarium)

  • Is there a clean separation of my layers with this attempt at Domain Driven Design in XAML and C#

    - by Buddy James
    I'm working on an application using a mixture of TDD and DDD. I'm working hard to separate the layers of my application, and that is where my question comes in. My solution is laid out as follows:

      Solution
        MyApp.Domain (WinRT class library)
          Entity (folder)
            Interfaces (folder)
              IPost.cs (interface)
            BlogPosts.cs (implementation of IPost)
          Service (folder)
            Interfaces (folder)
              IDataService.cs (interface)
            BlogDataService.cs (implementation of IDataService)
        MyApp.Presentation (Windows 8 XAML + C# application)
          ViewModels (folder)
            BlogViewModel.cs
          App.xaml
          MainPage.xaml (contains a property of type BlogViewModel)
        MyApp.Tests (WinRT unit testing project used for my TDD)

    So I'm planning to use my ViewModel with the XAML UI. I'm writing a test and defining the interfaces in my system, and I have the following code thus far:

      [TestMethod]
      public void Get_Zero_Blog_Posts_From_Presentation_Layer_Returns_Empty_Collection()
      {
          IBlogViewModel viewModel = _container.Resolve<IBlogViewModel>();
          viewModel.LoadBlogPosts(0);
          Assert.AreEqual(0, viewModel.BlogPosts.Count, "There should be 0 blog posts.");
      }

    viewModel.BlogPosts is an ObservableCollection<IPost>. Now, my first thought is that I'd like the LoadBlogPosts method on the ViewModel to call a static method on the BlogPost entity. My problem is that I feel like I need to inject the IDataService into the entity object so that it promotes loose coupling. Here are the two options that I'm struggling with:

    1. Don't use a static method; use a member method on the BlogPost entity. Have BlogPost take an IDataService in the constructor and use dependency injection to resolve the BlogPost instance and the IDataService implementation.
    2. Don't use the entity to call the IDataService. Put the IDataService in the constructor of the ViewModel and use my container to resolve the IDataService when the ViewModel is instantiated.

    With option one the layers will look like this: ViewModel (presentation layer) -> Entity (domain layer) -> IDataService (service layer). With option two: ViewModel (presentation layer) -> IDataService (service layer).
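    To make the second option concrete, here is a minimal, hypothetical C# sketch of constructor-injecting the service into the ViewModel. The names IDataService, IPost and BlogViewModel come from the question above; the GetBlogPosts method on the service is an assumption added for illustration, not the asker's actual code.

      // Hypothetical sketch of option two: the ViewModel depends on the
      // service abstraction, and the entity stays persistence-ignorant.
      using System.Collections.ObjectModel;

      public class BlogViewModel : IBlogViewModel
      {
          private readonly IDataService _dataService;

          // The container resolves IDataService when the ViewModel is built.
          public BlogViewModel(IDataService dataService)
          {
              _dataService = dataService;
              BlogPosts = new ObservableCollection<IPost>();
          }

          public ObservableCollection<IPost> BlogPosts { get; private set; }

          public void LoadBlogPosts(int count)
          {
              BlogPosts.Clear();
              // GetBlogPosts(count) is an assumed method on IDataService.
              foreach (var post in _dataService.GetBlogPosts(count))
                  BlogPosts.Add(post);
          }
      }

    The appeal of this shape is that the domain entity never touches the service layer, matching the flatter ViewModel (presentation layer) -> IDataService (service layer) chain described above.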

  • Help with complex MVVM (multiple views)

    - by jsjslim
    I need help creating view models for the following scenario:

    - Deep, hierarchical data
    - Multiple views for the same set of data
    - Each view is a single, dynamically-changing view, based on the active selection
    - Depending on the value of a property, display different types of tabs in a tab control

    My questions: Should I create a view-model representation for each view (VM1, VM2, etc.)?

    1. If yes:
       a. Should I model the entire hierarchical relationship? (i.e., SubVM1, HouseVM1, RoomVM1)
       b. How do I keep all hierarchies in sync? (e.g., adding/removing nodes)
    2. If no:
       a. Do I use a huge, single view model that caters for all views?

    Here's an example of a single view:

    [Figure 1: Multiple views updated based on active room. Notice the tab control.]
    [Figure 2: Different active room. Multiple views updated. Tab control items changed based on an object's property.]
    [Figure 3: Different selection type. Entire view changes.]
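    One common answer to question 1b, sketched below in C# with hypothetical names, is to build the hierarchy once and have every view-specific view model wrap the same shared node instances, so adds and removes happen in one place and ObservableCollection change notifications keep every view current.

      // Hypothetical sketch: one shared hierarchy of node view models;
      // each view-specific VM exposes the same instances, so adding or
      // removing a node is done once and every view observes the change.
      using System.Collections.ObjectModel;

      public class RoomViewModel
      {
          public string Name { get; set; }
      }

      public class HouseViewModel
      {
          public HouseViewModel()
          {
              Rooms = new ObservableCollection<RoomViewModel>();
          }

          // Single source of truth for the room hierarchy.
          public ObservableCollection<RoomViewModel> Rooms { get; private set; }
      }

      // Two view-specific view models share the same HouseViewModel instance,
      // so an add/remove in one view is observed by the other automatically.
      public class FloorPlanViewModel
      {
          public FloorPlanViewModel(HouseViewModel house) { House = house; }
          public HouseViewModel House { get; private set; }
      }

      public class InventoryViewModel
      {
          public InventoryViewModel(HouseViewModel house) { House = house; }
          public HouseViewModel House { get; private set; }
      }

    Because both view models hold a reference to the same HouseViewModel, there is no second hierarchy to keep in sync.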

  • Need recommendation for transferring ASP.NET MVC skills to PHP

    - by Tuck
    I am looking to translate my skills in .NET to PHP, specifically in regards to ASP.NET MVC. At work I am currently using .NET MVC 2.0 on a variety of projects and thoroughly enjoy the platform. Specifically, I enjoy the very minimal configuration required to get a project up and running (just create the project, define routes, and start coding), as well as the ability for controller actions to return different items (i.e. ActionResult, JsonResult). Another piece I really like is the way the view/model interaction can be handled. For example, I like being able to call return View(model) and having a view page (.aspx) load with the full model object available to the view, regardless of the model type. I'm looking for a PHP implementation of MVC that is the most similar to what I am already familiar with. I don't need anything apart from the MVC functionality. I've looked at Zend, Symfony, CodeIgniter, etc. and, while they look like they'll be fun to play with in the future, they provide much more functionality than I need. I'd prefer to write my own DAL, form helpers, delegate handlers, authentication/ACL pieces, etc. In short, I just need something to handle the routing and view interactions, and I will worry about the model implementation myself. Can someone please point me to some lightweight code that accomplishes, or comes close to accomplishing, my objectives above? Or, can someone identify just the portions of a larger framework that do the same (again, I'm not currently interested in implementing something on a big framework, just the MVC portion, and I want to implement the model portion myself as much as possible)? Thanks in advance.

  • Not use CSS definitions for one <FORM>

    - by Svisstack
    I have a template from ThemeForest and I don't want to edit the CSS from this template, because I don't have time for it. But I want to integrate PayPal buttons into my webpage, and the problem is that the PayPal button uses a <select> tag for selecting the payment option. The template overloads the style for the <select> tag, so it doesn't look the way it should. How can I stop this element from using the CSS? I don't want to use it, and if I don't have to, I don't want to edit this CSS ;-) This CSS looks weird; must I edit it to solve this problem? What is the best solution for this?

      /*//// - Forms - ////*/
      form { margin-bottom:20px; }
      body.ie7 form, body.ie8 { margin-bottom:40px; }
      form p { margin-bottom:15px; }
      form label { float:left; width:140px; margin-top:5px; }
      form input, form textarea, form select {
          padding:10px 5px;
          background:#fff url(../img/bg-input.gif) repeat-x top;
          border:1px solid #D9D9D9;
          width:448px;
          border-radius:3px;
          -moz-border-radius:3px;
          -webkit-border-radius:3px;
      }
      form input.small { width:35px; }

      html, body, div, span, object, iframe, h1, h2, h3, h4, h5, h6, p,
      blockquote, pre, abbr, address, cite, code, del, dfn, em, img, ins,
      kbd, q, samp, small, strong, sub, sup, var, b, i, dl, dt, dd, ol, ul,
      li, fieldset, form, label, legend, table, caption, tbody, tfoot,
      thead, tr, th, td, article, aside, figure, footer, header, hgroup,
      menu, nav, section, menu, time, mark, audio, video {
          margin:0;
          padding:0;
          border:0;
          outline:0;
          font-size:100%;
          vertical-align:baseline;
          background:transparent;
      }

    Can anyone help me?

  • How does this game loop actually work?

    - by Nicolai
    I read this playfulJS post about ray-casting: http://www.playfuljs.com/a-first-person-engine-in-265-lines/ It looks really interesting, so I decided to look at his JavaScript. I am no expert in JavaScript, so I quickly got lost. It's the game loop "object" that really gets me; I simply don't understand how it works. From the code:

      function GameLoop() {
        this.frame = this.frame.bind(this);
        this.lastTime = 0;
        this.callback = function() {};
      }

      GameLoop.prototype.start = function(callback) {
        this.callback = callback;
        requestAnimationFrame(this.frame);
      };

      GameLoop.prototype.frame = function(time) {
        var seconds = (time - this.lastTime) / 1000;
        this.lastTime = time;
        if (seconds < 0.2) this.callback(seconds);
        requestAnimationFrame(this.frame);
      };

      var loop = new GameLoop();

      loop.start(function frame(seconds) {
        map.update(seconds);
        player.update(controls.states, map, seconds);
        camera.render(player, map);
      });

    Now, what really confuses me here is this bind stuff and how this actually loops. I am guessing that if less than 0.2 seconds have passed since the last time the loop was run, it simply goes back to re-check the time. If more than 0.2 seconds have passed, it leaves the frame function and executes the 3 lines in the loop. But if this is true, then how does loop.start() get called again? And what on earth is the meaning of this.frame = this.frame.bind(this);? I've looked up prototype's bind() but I really don't understand it.

  • Is It Time To Specialize?

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/18/is-it-time-to-specialize.aspx

    Over my career I have made a living as a generalist. I have been a jack of all trades and a master of none. It has served me well in that I am able to move from one technology to another quickly and make myself productive. Where it becomes a problem is deep knowledge. I am constantly digging for the things that aren't basic knowledge. How do you make a product like WCF or Windows RT do more than just "Hello World"?

    As an architect I need to be a jack of all trades. This is what helps me bring the big picture of a project into focus for developers with different skills, so they can accomplish the goals of the project. It is key when the mix of technologies crosses Windows, Unix and mainframe, with different languages and databases. The larger the company the project is for, the more likely this scenario will arise.

    As a consultant and a developer I need to have specialized skills in order to get the job done efficiently. If I have a SharePoint or Windows Phone project, knowing the object model details and possible roadblocks of the technology allows me to stay within budget as well as better advise the client on technology decisions.

    What is the solution? Constant learning and associating with developers who specialize in a variety of technologies is the best thing you can do. You may have thought you were done with classes when you left college, but in this industry you need to constantly be learning new products and languages. The ultimate answer is you must generally specialize: learn as many subject areas as possible, but go deep whenever you can. Sleep is overrated. Good luck.

  • Teaching myself, as a physicist, to become a better programmer

    - by user787267
    I've always liked physics, and I've always liked coding, so when I got the offer for a PhD position doing numerical physics (details are not relevant; it's mostly parallel programming for a cluster) at a university, it was a no-brainer for me. However, like most physicists, I'm self-taught. I don't have broad background knowledge about how to code in an object-oriented way, or the name of that specific algorithm that optimizes the search in some kD tree. Since all my work so far has been more concerned with the physics and the scientific results, I undoubtedly have some bad habits, all the more so because my coding is my own, and not really teamwork. I have mostly used C since it is very straightforward and "what you write is what you get"; no need for fancy abstractions. However, I have recently switched to C++ since I'd like to learn more about the power that comes with abstraction, and it's pretty C-like (syntax-wise at least).

    How do I teach myself to code in a good, abstract way, like a graduate in computer science? I know my code is efficient, but I want it to be elegant and readable as well. Keep in mind that I don't have time to read several 1000-page tomes about abstract programming. I need to spend time on actual, physics-related research (my supervisor would laugh at me if he knew I spent time thinking about how to program elegantly). How do I assess whether my work is also good from a programmer's perspective?

  • Jump and run HTML5 Game Framework

    - by user1818924
    We're developing a jump-and-run game with HTML5 and JavaScript and have to build our own game framework for this. We have some difficulties here and would like to ask you for some advice.

    We have a "Stage" object, which represents the root of our game and is a global div wrapper. The stage can contain multiple "Scenes", which are also div elements. We would implement a scene for the playing task, one for pause, etc., and switch between them. Each scene can contain multiple "Layers", each representing a canvas. These layers contain "ObjectEntities", which represent images or other shapes like rectangles. Each entity has its own temporary canvas, so that one entity can draw an image while another draws a rectangle.

    We set an activeScene in our Stage, so when the game is played, just the active scene is drawn. Calling activeScene.draw() calls all sublayers to draw, which draw their entities (calling drawImage(entity.canvas)). But is this good practice, having multiple canvases to draw? Each game loop, every layer context is cleared and redrawn. E.g. we just have a still background layer... wouldn't it be more useful to draw this once rather than clearing and redrawing it every time? Or should we use a global canvas, for example in the Stage, and just use that canvas to draw? But we thought this would be too expensive...

    Other question: do you have any advice on how to dive into implementing our own framework? Most stuff we find online relies on existing frameworks, or they just implement their game without building a framework.

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach.

    This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package.

      set.seed(1)
      d <- data.frame(year = rep(2000:2014, each = 3),
                      count = round(runif(45, 0, 20)))
      dim(d)
      library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

      # Example 1
      res <- ddply(d, "year", function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(cv.count = cv)
      })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

      D <- ore.push(d)
      res <- ore.groupApply(D, D$year, function(x) {
        mean.count <- mean(x$count)
        sd.count <- sd(x$count)
        cv <- sd.count/mean.count
        data.frame(year = x$year[1], cv.count = cv)
      }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result.

    The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

      R> class(res)
      [1] "ore.frame"
      attr(,"package")
      [1] "OREbase"
      R> head(res)
        year  cv.count
      1 2000 0.3984848
      2 2001 0.6062178
      3 2002 0.2309401
      4 2003 0.5773503
      5 2004 0.3069680
      6 2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

      # Example 2
      ddply(d, "year", summarise, mean.count = mean(count))

      res <- ore.groupApply(D, D$year, function(x) {
        mean.count <- mean(x$count)
        data.frame(year = x$year[1], mean.count = mean.count)
      }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

      R> head(res)
        year mean.count
      1 2000   7.666667
      2 2001  13.333333
      3 2002  15.000000
      4 2003   3.000000
      5 2004  12.333333
      6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

      # Example 3
      ddply(d, "year", transform, total.count = sum(count))

      res <- ore.groupApply(D, D$year, function(x) {
        total.count <- sum(x$count)
        data.frame(year = x$year[1], count = x$count, total.count = total.count)
      }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

      > head(res)
        year count total.count
      1 2000     5          23
      2 2000     7          23
      3 2000    11          23
      4 2001    18          40
      5 2001     4          40
      6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

      # Example 4
      ddply(d, "year", mutate, mu = mean(count), sigma = sd(count), cv = sigma/mu)

      res <- ore.groupApply(D, D$year, function(x) {
        mu <- mean(x$count)
        sigma <- sd(x$count)
        cv <- sigma/mu
        data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
      }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

      R> head(res)
        year count        mu    sigma        cv
      1 2000     5  7.666667 3.055050 0.3984848
      2 2000     7  7.666667 3.055050 0.3984848
      3 2000    11  7.666667 3.055050 0.3984848
      4 2001    18 13.333333 8.082904 0.6062178
      5 2001     4 13.333333 8.082904 0.6062178
      6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

      # Example 5
      baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
      x <- ddply(baseball.dat, c("year", "team"), summarize,
                 homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

      BB.DAT <- ore.push(baseball)
      BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
      BB.DAT2 <- subset(BB.DAT, year > 2000)
      X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
        data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
      }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
      res <- ore.sort(X, by = c("year", "team"))

      R> head(res)
        year team homeruns
      1 2001  ANA        4
      2 2001  ARI      155
      3 2001  ATL       63
      4 2001  BAL       58
      5 2001  BOS       77
      6 2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply within the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server. The graph will be returned to the client window, as depicted below.

      # Example 6 with ggplot2
      library(ggplot2)
      df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                       y = rnorm(30))
      # Compute sample mean and standard deviation in each group
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      # Set up a skeleton ggplot object and add layers:
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)

      DF <- ore.push(df)
      ore.tableApply(DF, function(df) {
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

      df2 <- rbind(df, df, df)
      df2$y <- df2$y + rnorm(nrow(df2))
      df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))
      DF2 <- ore.push(df2)
      res <- ore.groupApply(DF2, DF2$index, function(df) {
        df <- df[, 1:2]
        library(ggplot2)
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)
      }, parallel = TRUE)
      res[[1]]
      res[[2]]
      res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

  • What design patterns are the worst or most narrowly defined?

    - by Akku
    For every programming project, managers with past programming experience try to shine by recommending some design patterns for your project. I like design patterns when they make sense, or if you need a scalable solution. I've used Proxies, Observers and Command patterns in a positive way, for example, and do so every day. But I'm really hesitant to use, say, a Factory pattern if there's only one way to create an object: a factory might make it all easier in the future, but it complicates the code and is pure overhead.

    So, my question is in respect to my future career and my answer to manager types throwing random pattern names around: Which design patterns did you use that set you back overall? Which are the worst design patterns, the ones you shouldn't consider unless you're in that one single situation where they make sense (read: which design patterns are very narrowly defined)? It's as if I were looking for the negative reviews of an overall good product on Amazon, to see what bugged people most about using design patterns. And I'm not talking about anti-patterns here, but about patterns that are usually thought of as "good" patterns.

    Edit: As some answered, the problem is most often that patterns are not "bad" but "used wrong". If you know patterns that are often misused or even difficult to use, they would also fit as an answer.
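    To make the Factory complaint concrete, here is a small, purely illustrative C# sketch (all names hypothetical) of the overhead being described: a factory wrapping a type that only ever has one way to be created.

      // Hypothetical illustration: a factory that adds indirection but no value,
      // because there is exactly one way to create the object.
      public class ReportGenerator
      {
          public string Generate() { return "report"; }
      }

      // Pure overhead today; it only pays off if a second implementation
      // or a real creation decision ever appears.
      public class ReportGeneratorFactory
      {
          public ReportGenerator Create()
          {
              return new ReportGenerator(); // the only possible outcome
          }
      }

      // Call sites now read:
      //   var generator = new ReportGeneratorFactory().Create();
      // instead of simply:
      //   var generator = new ReportGenerator();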

  • Checking if your SIMPLE databases need a log backup

    - by Fatherjack
    Hopefully you have read the blog by William Durkin explaining why your SIMPLE databases need a log backup in some cases. There is a SQL Server bug that means in some cases databases are marked as being in SIMPLE recovery but have a log wait type that shows they are not properly configured. Please read his blog for the full explanation and a great description of how to reproduce the issue. As part of our (William happens to be my boss) work to recover our affected databases, I wrote this small PowerShell script to quickly check our servers for databases that needed the attention William details.

      cls
      $Servers = "Server01", "Server02", "etc", "etc"
      foreach ($Server in $Servers) {
          write-host "************" $Server "****************"
          $server = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
          foreach ($db in $Server.Databases) {
              $db | where { $_.RecoveryModel -eq "Simple" -and $_.LogReuseWaitStatus -ne "nothing" } |
                  select Name, LogReuseWaitStatus
          }
      }

    If you get any results from this query then you should consult William's blog for the details on what action you should take. This script can give false positives in some circumstances, depending on how busy your databases are. Hopefully this will let you check your servers quickly, and if you find any problems you can reference William's blog to understand what you need to do.

  • Where can I find out the following info on python (coming from Ruby)

    - by Michael Durrant
    I'm coming from Ruby and Ruby on Rails to Python. Where can I find resources about:

    - The command prompt: what is Python's version of 'irb'?
    - Django: what is a good resource for installing it, using it, etc.?
    - Pythoncasts: is there anything like RailsCasts, i.e. good video tutorials?
    - Web sites with the API, and info about which versions have what and which to use
    - Info and recommendations on editors, plugins and IDEs
    - Common gotchas for newbies and good things to know at the outset
    - Scaling issues and their common causes
    - What is the equivalent of 'gems', i.e. components I can plug in?
    - What are popular Django plugins for authentication and forms, similar to devise and simple_form?
    - Testing: what's available? Anything similar to RSpec?
    - Database adapters: any preferences?
    - Framework info: is Django MVC like Rails?
    - OO'ness: is everything an object that gets sent messages, or is it a different paradigm?
    - Syntax: anything like JSLint for checking for well-formed code?

  • Codifying a natural language requirements spec

    - by ProfK
    I have a fairly technical functional requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.:

    a) Must be alphanumeric, max length 20.
    b) Must be a dropdown, with values from table x.
    c) Is mandatory.
    d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value.

    Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc.

    e) One such complex rule says a status field can only be assigned a certain value if a many-side record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record.

    I am looking for advice on how to codify these requirements into a more structured specification, defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.
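    Absent an off-the-shelf system, one possible direction, shown as a hypothetical C# sketch below, is to encode each rule as data keyed by table and field, so that the conditional rules like (d) and the related-record rule (e) become enumerable cases rather than prose. All names here, including the "Qualified" status value, are invented for illustration.

      // Hypothetical sketch: encoding the prose rules as structured data.
      // RuleKind distinguishes simple field constraints from the
      // related-record rule described in (e).
      public enum RuleKind
      {
          Alphanumeric, MaxLength, Dropdown, Mandatory,
          MandatoryIf, RequiresRelatedRecordInAnyOf
      }

      public class BusinessRule
      {
          public string Table;           // e.g. "Applicant"
          public string Field;           // e.g. "Status"
          public RuleKind Kind;
          public string Value;           // max length, lookup table name, or status value
          public string ConditionField;  // for MandatoryIf: the field that triggers the rule
          public string[] RelatedTables; // for rule (e): tables to check for related records
      }

      public static class RuleCatalog
      {
          // Rule (e) from the spec, expressed as data rather than prose.
          public static readonly BusinessRule StatusRule = new BusinessRule
          {
              Table = "Applicant",
              Field = "Status",
              Kind = RuleKind.RequiresRelatedRecordInAnyOf,
              Value = "Qualified",
              RelatedTables = new[] { "HighSchool", "Diploma" }
          };
      }

    Once rules are data, a single validator can iterate over them, and the spec itself could even be serialized to XML or a database table rather than living in prose.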

  • WebSocket Samples in GlassFish 4 build 66 - javax.websocket.* package: TOTD #190

    - by arungupta
    This blog has published a few posts on using the JSR 356 Reference Implementation (Tyrus) integrated in GlassFish 4 promoted builds:

    - TOTD #183: Getting Started with WebSocket in GlassFish
    - TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark
    - TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket
    - TOTD #186: Custom Text and Binary Payloads using WebSocket
    - TOTD #189: Collaborative Whiteboard using WebSocket in GlassFish 4

    The earlier blogs created a WebSocket endpoint as:

      import javax.net.websocket.annotations.WebSocketEndpoint;

      @WebSocketEndpoint("websocket")
      public class MyEndpoint { . . .

    Based upon the discussion in the JSR 356 EG, the package names have changed to javax.websocket.*. So the updated endpoint definition will look like:

      import javax.websocket.WebSocketEndpoint;

      @WebSocketEndpoint("websocket")
      public class MyEndpoint { . . .

    The POM dependency is:

      <dependency>
        <groupId>javax.websocket</groupId>
        <artifactId>javax.websocket-api</artifactId>
        <version>1.0-b09</version>
      </dependency>

    And if you are using GlassFish 4 build 66, then you also need to provide a dummy EndpointFactory implementation as:

      import javax.websocket.WebSocketEndpoint;

      @WebSocketEndpoint(value="websocket", factory=MyEndpoint.DummyEndpointFactory.class)
      public class MyEndpoint { . . .

        class DummyEndpointFactory implements EndpointFactory {
          @Override public Object createEndpoint() { return null; }
        }
      }

    This is only interim and will be cleaned up in subsequent builds. But I've seen a couple of complaints about this already, so it deserves a short blog. Have you been tracking the latest Java EE 7 implementations in GlassFish 4 promoted builds?

  • SD Card Reader not working in Ubuntu 12.04

    - by tripkane
    I have read many other posts on this issue and believe that Ubuntu 12.04 is not even recognizing my SD card reader as just that.

    Computer model: Metabox (Australian builder of Clevo laptops) / Clevo P150EM
    OS: Ubuntu 12.04 (64-bit)
    CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
    HD: 120GB Intel 550/520MB/s SSD

    According to the people who built my computer, the specs of the SD card reader are as follows:

    Manufacturer: Realtek Semiconductor Corp.
    Location: PCI bus 3
    Hardware ID: PCI\Ven_10EC&DEV_5289&SUBSYS_51051558
    Physical device object name: \Device\NTPNP_PCI0015

    Here are the relevant outputs of the following commands run from the terminal:

    sudo lshw:

      *-generic UNCLAIMED
           description: Unassigned class
           product: Realtek Semiconductor Co., Ltd.
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:03:00.0
           version: 01
           width: 32 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list
           configuration: latency=0
           resources: memory:f6a00000-f6a0ffff

    sudo lspci -v -nn:

      03:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device [10ec:5289] (rev 01)
           Subsystem: CLEVO/KAPOK Computer Device [1558:5105]
           Flags: bus master, fast devsel, latency 0, IRQ 4
           Memory at f6a00000 (32-bit, non-prefetchable) [size=64K]
           Capabilities: [40] Power Management version 3
           Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
           Capabilities: [70] Express Endpoint, MSI 00
           Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
           Capabilities: [d0] Vital Product Data
           Capabilities: [100] Advanced Error Reporting
           Capabilities: [140] Virtual Channel
           Capabilities: [160] Device Serial Number 00-00-00-00-00-00-00-00

    Do the "Unassigned" details in these outputs mean that Ubuntu doesn't know that the SD card reader is one, and doesn't know what to do with it? If so, how should I go about fixing it? Cheers ;)

  • ldap client cannot contact ldap server

    - by Van
    I have followed these instructions: https://help.ubuntu.com/12.04/serverguide/openldap-server.html#openldap-auth-config

    The LDAP server works fine; I can log into it using an LDAP account. However, I configured another Ubuntu 12.04 server as an LDAP client for authentication, and it cannot contact the server. Here is the error, on the client:

      # ldapsearch -Q -LLL -Y EXTERNAL -H ldapi://ldap01.domain.local -b cn=config dn
      ldap_sasl_interactive_bind_s: Can't contact LDAP server (-1)

    The server can receive requests. On the client:

      # telnet ldap01.domain.local 389
      Trying 10.3.17.10...
      Connected to sisn01.domain.local.
      Escape character is '^]'.

    On the client:

      # ldapsearch -x -h ldap01.domain.local -b cn=config dn
      # extended LDIF
      #
      # LDAPv3
      # base <cn=config> with scope subtree
      # filter: (objectclass=*)
      # requesting: dn
      #

      # search result
      search: 2
      result: 32 No such object

      # numResponses: 1

    On the server:

      # ps aux | grep slapd
      openldap 3759 0.0 0.2 564820 8228 ? Ssl 08:39 0:00 /usr/sbin/slapd -h ldap:/// ldapi:/// -g openldap -u openldap -F /etc/ldap/slapd.d

    I suspect I am missing a configuration parameter either on the server or on the client; I just cannot figure out which. Any help here would be appreciated.

  • Searching Your PL/SQL Source with Oracle SQL Developer

    - by thatjeffsmith
    Version 3.2.1 included a few tweaks along with several hundred bug fixes. One of those tweaks was the addition of 'ALL_SOURCE' as a selection for the Type drop-down in the Find Database Object panel.

    [Screenshot: Scroll ALL the way down to the bottom]

    Searching the database for your code or objects can be expensive. The ALL_SOURCE view comes in pretty handy when I want to demo how to cancel long-running queries or the Task Progress panel. Did you know you can manage all of your long-running queries here?

    [Screenshot: Yeah, don't run this]

    I pretty much hosed our demo pod at Open World b/c I ran that same query but added an ORDER BY b.TEXT DESC to it, and blew up the TEMP space and filled the primary partition on the image. Fun stuff. Anyways, where was I going with this? Oh yeah, searching ALL_SOURCE can be expensive. So we took it out of the product for a while. And now it's back in.

    If you select the 'ALL' field, it doesn't actually search EVERYTHING, because that would probably be less than helpful. So if you want to search your PL/SQL objects for a scrap or bit of code, use the 'ALL_SOURCE' option in v3.2.1. Double-click on the search results to go to the code you're looking for. Be careful what you search for. Just like any query, it could take a while.

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I'd try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed. Using the app.config allows a user to just open up the .config file that sits beside their .exe file, edit the URLs right there, and re-run the app; no recompiling, no redeployment necessary.

    I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly. Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.

    Simple and Easy

    The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler. For example, if we just wanted to add a list of database server URLs to use in my app, we could do this in the app.config file like so:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" />
        </configSections>
        <startup>
          <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
        </startup>
        <ConnectionManagerDatabaseServers>
          <add key="localhost" value="localhost" />
          <add key="Dev" value="Dev.MyDomain.local" />
          <add key="Test" value="Test.MyDomain.local" />
          <add key="Live" value="Prod.MyDomain.com" />
        </ConnectionManagerDatabaseServers>
      </configuration>

    And then you can access these values in code like so:

      string devUrl = string.Empty;
      var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
      if (connectionManagerDatabaseServers != null)
      {
          devUrl = connectionManagerDatabaseServers["Dev"].ToString();
      }

    Sometimes though you don't know what the keys are going to be, and you just want to grab all of the values in that ConnectionManagerDatabaseServers section. In that case you can get them all like this:

      // Grab the Environments listed in the App.config and add them to our list.
      var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
      if (connectionManagerDatabaseServers != null)
      {
          foreach (var serverKey in connectionManagerDatabaseServers.AllKeys)
          {
              string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault();
              AddDatabaseServer(serverValue);
          }
      }

    And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings. So this works great, but what about when we want to bring in more values than just a single string? (Technically you could use this to bring in two strings, where the "key" could be the other string you want to store; for example, we could have stored the value of the key as the user-friendly name of the URL.)

    More Advanced (and more complicated)

    So if you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us. Instead you have to build your own types. Here let's assume that we again want to configure a set of addresses (i.e. URLs), but we want to specify some extra info with them, such as the user-friendly name, whether or not they require SSL, and a list of security groups that are allowed to save changes made to these endpoints.

    So let's start by looking at the app.config:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <configSections>
          <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" />
        </configSections>
        <startup>
          <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
        </startup>
        <ConnectionManagerDataSection>
          <ConnectionManagerEndpoints>
            <add name="Development" address="Dev.MyDomain.local" useSSL="false" />
            <add name="Test" address="Test.MyDomain.local" useSSL="true" />
            <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" />
          </ConnectionManagerEndpoints>
        </ConnectionManagerDataSection>
      </configuration>

    The first thing to notice here is that my section is now using the type "ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection" (the fully qualified path to my new class) ", ConnectionManagerUpdater" (the name of the assembly my new class is in). Next, you will also notice an extra layer down in the <ConnectionManagerDataSection>, which is the <ConnectionManagerEndpoints> element. This is a new collection class that I created to hold each of the Endpoint entries that are defined. Let's look at that code now:

      using System;
      using System.Collections.Generic;
      using System.Configuration;
      using System.Linq;
      using System.Text;
      using System.Threading.Tasks;

      namespace ConnectionManagerUpdater.Data.Configuration
      {
          public class ConnectionManagerDataSection : ConfigurationSection
          {
              /// <summary>
              /// The name of this section in the app.config.
              /// </summary>
              public const string SectionName = "ConnectionManagerDataSection";

              private const string EndpointCollectionName = "ConnectionManagerEndpoints";

              [ConfigurationProperty(EndpointCollectionName)]
              [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")]
              public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints
              {
                  get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; }
              }
          }

          public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
          {
              protected override ConfigurationElement CreateNewElement()
              {
                  return new ConnectionManagerEndpointElement();
              }

              protected override object GetElementKey(ConfigurationElement element)
              {
                  return ((ConnectionManagerEndpointElement)element).Name;
              }
          }

          public class ConnectionManagerEndpointElement : ConfigurationElement
          {
              [ConfigurationProperty("name", IsRequired = true)]
              public string Name
              {
                  get { return (string)this["name"]; }
                  set { this["name"] = value; }
              }

              [ConfigurationProperty("address", IsRequired = true)]
              public string Address
              {
                  get { return (string)this["address"]; }
                  set { this["address"] = value; }
              }

              [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)]
              public bool UseSSL
              {
                  get { return (bool)this["useSSL"]; }
                  set { this["useSSL"] = value; }
              }

              [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)]
              public string SecurityGroupsAllowedToSaveChanges
              {
                  get { return (string)this["securityGroupsAllowedToSaveChanges"]; }
                  set { this["securityGroupsAllowedToSaveChanges"] = value; }
              }
          }
      }

    So here the first class we declare is the one that appears in the <configSections> element of the app.config. It is ConnectionManagerDataSection, and it inherits from the necessary System.Configuration.ConfigurationSection class. This class has just one property (other than the expected section name), which basically says: I have a collection property, which is actually a ConnectionManagerEndpointsCollection, the next class defined.

    The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the required members. The first tells it what type of element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and the second is a function specifying which property on our ConnectionManagerEndpointElement class is the unique key, which I've specified to be the Name field.

    The last class defined is the actual meat of our elements. It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the XML of the app.config). The "ConfigurationProperty" attribute on each of the properties tells what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as whether that property is required and what its default value should be.

    Finally, the code to actually access these values would look like this:

      // Grab the Environments listed in the App.config and add them to our list.
      var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection;
      if (connectionManagerDataSection != null)
      {
          foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints)
          {
              var endpoint = new ConnectionManagerEndpoint()
              {
                  Name = endpointElement.Name,
                  ServerInfo = new ConnectionManagerServerInfo()
                  {
                      Address = endpointElement.Address,
                      UseSSL = endpointElement.UseSSL,
                      SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges
                          .Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList()
                  }
              };
              AddEndpoint(endpoint);
          }
      }

    This looks very similar to what we had before in the "simple" example. The main points of interest are that we cast the section as ConnectionManagerDataSection (the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.

    Also, some other helpful resources around using app.config that I found (and for parts that I didn't really explain in this article) are:

    - How do you use sections in C# 4.0 app.config? (Stack Overflow) <== Shows how to use Section Groups as well, which is something that I did not cover here, but might be of interest to you.
    - How to: Create Custom Configuration Sections Using ConfigurationSection (MSDN)
    - ConfigurationSection Class (MSDN)
    - ConfigurationCollectionAttribute Class (MSDN)
    - ConfigurationElementCollection Class (MSDN)

    I hope you find this helpful. Feel free to leave a comment. Happy coding!

  • Teach Your Kid to Code (…and Vote early!)

    - by Steve Michelotti
    Next Tuesday I will be at the CMAP main meeting presenting Teach Your Kid to Code. Next Tuesday is of course Election Day so you have to make sure you vote early in order to get over to CMAP for the 7:00PM presentation. I will be co-presenting this talk with my 5th grade son. Here is the abstract: Have you ever wanted a way to teach your kid to code? For that matter, have you ever wanted to simply be able to explain to your kid what you do for a living? Putting things in a context that a kid can understand is not as easy as it sounds. If you are someone curious about these concepts, this is a “can’t miss” presentation that will be co-presented by Justin Michelotti (5th grader) and his father. Bring your kid with you to CMAP for this fun and educational session. We will show tools you may not have been aware of like SmallBasic and Kodu – we’ll even throw in a little Visual Studio and Windows 8! Concepts such as variables, conditionals, loops, and functions will be covered while we introduce object oriented concepts without any of the confusing words. Kids are not required for entry! I promise this will be an entertaining presentation! We hope to see you (and your kids) there. Click here for details.

  • avr-gcc 4.7 on ubuntu 12.04

    - by birky
    Please, how can I install avr-gcc version 4.7 on Ubuntu 12.04?

      :~$ avr-gcc -v
      Using built-in specs.
      COLLECT_GCC=avr-gcc
      COLLECT_LTO_WRAPPER=/usr/lib/gcc/avr/4.5.3/lto-wrapper
      Target: avr
      Configured with: ../src/configure -v --enable-languages=c,c++ --prefix=/usr/lib --infodir=/usr/share/info --mandir=/usr/share/man --bindir=/usr/bin --libexecdir=/usr/lib --libdir=/usr/lib --enable-shared --with-system-zlib --enable-long-long --enable-nls --without-included-gettext --disable-libssp --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=avr
      Thread model: single
      gcc version 4.5.3 (GCC)

      :~$ gcc -v
      Using built-in specs.
      COLLECT_GCC=gcc
      COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.6/lto-wrapper
      Target: x86_64-linux-gnu
      Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.6.3-1ubuntu5' --with-bugurl=file:///usr/share/doc/gcc-4.6/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.6 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.6 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-gnu-unique-object --enable-plugin --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
      Thread model: posix
      gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)

  • Evolution of an Application: how to manage and improve core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve. One of the ways in which it is evolving is into a multi-brand application: in this case, several different companies using the application, with different templates/content and some slight business logic changes between them.

    The problem I'm facing is establishing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, such as using an alternative mailing-list provider or capturing some extra data in a form. I don't want to have if (brand === x) { ... } else { ... } all over the site, especially as most of what needs to be changed can be handled by extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best, especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered:

    1. Use a static loader, similar to Zend_Loader, which can take the class being requested and the brand, and return the correct object:
       $class = App_Loader::getObject('User', $brand);
    2. Factory classes. We use these in the application already for Products, but we could utilise them here also to provide a transparent interface to the class.
    3. Routing the page request to a specific brand controller. This, however, seems like it would duplicate a lot of code/logic.

    Is there a pattern or something else I should be considering to solve this problem? And how do you manage a growing project that has multiple custom instances in production?

    Update: This is a PHP application, so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion regarding general ETL best practises, the subject of checkpoint files as a means for package restartability came up, and I stated that I was dead against using them. For anyone that may care, here is why:

    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree)
    - They don't make any allowance for loop iterations
    - They cannot store variables of type Object
    - They are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - They are ignored by eventhandlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios
    - They don't work properly

    I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file, and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

  • Panning with the OpenGL Camera / View Matrix

    - by Pris
    I'm gonna try this again I've been trying to setup a simple camera class with OpenGL but I'm completely lost and I've made zero progress creating anything useful. I'm using modern OpenGL and the glm library for matrix math. To get the most basic thing I can think of down, I'd like to pan an arbitrarily positioned camera around. That means move it along its own Up and Side axes. Here's a picture of a randomly positioned camera looking at an object: It should be clear what the Up (Green) and Side (Red) vectors on the camera are. Even though the picture shows otherwise, assume that the Model matrix is just the identity matrix. Here's what I do to try and get it to work: Step 1: Create my View/Camera matrix (going to refer to it as the View matrix from now on) using glm::lookAt(). Step 2: Capture mouse X and Y positions. Step 3: Create a translation matrix mapping changes in the X mouse position to the camera's Side vector, and mapping changes in the Y mouse position to the camera's Up vector. I get the Side vector from the first column of the View matrix. I get the Up vector from the second column of the View matrix. Step 4: Apply the translation: viewMatrix = glm::translate(viewMatrix,translationVector); But this doesn't work. I see that the mouse movement is mapped to some kind of perpendicular axes, but they're definitely not moving as you'd expect with respect to the camera. Could someone please explain what I'm doing wrong and point me in the right direction with this camera stuff?

  • Building/Installing Required a52 Plugin

    - by user71139
    I am trying to compile and install the a52 plugin following the instructions from here: https://help.ubuntu.com/community/DigitalAC-3Pulseaudio

    This worked on Ubuntu 11.10 but gives me some errors when I try to compile the plugin on Ubuntu 12.04. I've searched for a solution; however, I couldn't find much on this topic in general, let alone a solution. I would really appreciate some help on this:

      bogdan@bogdan-desktop:~$ cd ~/tmp/
      bogdan@bogdan-desktop:~/tmp$ cd alsa-plugins-1.0.25/
      bogdan@bogdan-desktop:~/tmp/alsa-plugins-1.0.25$ make
      make all-recursive
      make[1]: Entering directory `/home/bogdan/tmp/alsa-plugins-1.0.25'
      Making all in oss
      make[2]: Entering directory `/home/bogdan/tmp/alsa-plugins-1.0.25/oss'
      /bin/bash ../libtool --tag=CC --mode=compile gcc -DHAVE_CONFIG_H -I. -I.. -Wall -g -I/usr/include/alsa -g -O2 -MT ctl_oss.lo -MD -MP -MF .deps/ctl_oss.Tpo -c -o ctl_oss.lo ctl_oss.c
      ../libtool: line 831: X--tag=CC: command not found
      ../libtool: line 864: libtool: ignoring unknown tag : command not found
      ../libtool: line 831: X--mode=compile: command not found
      ../libtool: line 997: *** Warning: inferring the mode of operation is deprecated.: command not found
      ../libtool: line 998: *** Future versions of Libtool will require --mode=MODE be specified.: command not found
      ../libtool: line 1141: Xgcc: command not found
      ../libtool: line 1141: X-DHAVE_CONFIG_H: command not found
      ../libtool: line 1141: X-I.: command not found
      ../libtool: line 1141: X-I..: command not found
      ../libtool: line 1141: X-Wall: command not found
      ../libtool: line 1141: X-g: command not found
      ../libtool: line 1141: X-I/usr/include/alsa: No such file or directory
      ../libtool: line 1141: X-g: command not found
      ../libtool: line 1141: X-O2: command not found
      ../libtool: line 1141: X-MT: command not found
      ../libtool: line 1141: Xctl_oss.lo: command not found
      ../libtool: line 1141: X-MD: command not found
      ../libtool: line 1141: X-MP: command not found
      ../libtool: line 1141: X-MF: command not found
      ../libtool: line 1141: X.deps/ctl_oss.Tpo: No such file or directory
      ../libtool: line 1141: X-c: command not found
      ../libtool: line 1192: Xctl_oss.lo: command not found
      ../libtool: line 1197: libtool: compile: cannot determine name of library object from `': command not found
      make[2]: *** [ctl_oss.lo] Error 1
      make[2]: Leaving directory `/home/bogdan/tmp/alsa-plugins-1.0.25/oss'
      make[1]: *** [all-recursive] Error 1
      make[1]: Leaving directory `/home/bogdan/tmp/alsa-plugins-1.0.25'
      make: *** [all] Error 2
      bogdan@bogdan-desktop:~/tmp/alsa-plugins-1.0.25$
