Search Results

Search found 25401 results on 1017 pages for 'adobe partner program'.


  • Video Synthesis - Making waves, patterns, gradients...

    - by Nathan
    I'm writing a program to generate some wild visuals. So far I can paint each pixel with a random blue value:

        for (y = 0; y < YMAX; y++) {
            for (x = 0; x < XMAX; x++) {
                b = rand() % 255;
                setPixelColor(x, y, r, g, b);
            }
        }

    I'd like to do more than just make blue noise, but I'm not sure where to start (Google isn't helping me much today), so it would be great if you could share anything you know on the subject, or some links to related resources.
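
    For patterns rather than noise, one common starting point is to derive each pixel from a function of its coordinates instead of rand(). The sketch below only illustrates that idea; it assumes the same XMAX, YMAX and setPixelColor() as the snippet above, and the wave frequencies are arbitrary choices.

        #include <math.h>

        /* Waves + gradient sketch: two sine terms drive the blue channel,
           the x position drives a red gradient. Advance 't' each frame to
           animate the pattern. */
        void render_waves(double t)
        {
            for (int y = 0; y < YMAX; y++) {
                for (int x = 0; x < XMAX; x++) {
                    double v = sin(x * 0.05 + t) + sin((x * 0.3 + y * 0.7) * 0.05);
                    int b = (int)((v + 2.0) * 255.0 / 4.0); /* map [-2,2] to [0,255] */
                    int r = x * 255 / XMAX;                 /* horizontal gradient */
                    setPixelColor(x, y, r, 0, b);
                }
            }
        }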

    Read the article

  • dlopen() with dependencies between libraries

    - by peastman
    My program uses plugins, which are loaded dynamically with dlopen(). The locations of these plugins can be arbitrary, so they aren't necessarily in the library path. In some cases, one plugin needs to depend on another plugin. So if A and B are dynamic libraries, I'll first load A, then load B, which uses symbols defined in A. My reading of the dlopen() documentation implies that if I specify RTLD_GLOBAL this should all work. But it doesn't. When I call dlopen() on the second library, it fails with an error saying it couldn't find the first one (which had already been loaded with dlopen()):

        Error loading library /usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib:
        dlopen(/usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib, 9): Library not loaded: libOpenMMOpenCL.dylib
          Referenced from: /usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib
          Reason: image not found

    How can I make this work?
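
    For reference, a bare-bones version of the load order being described (a sketch, not a confirmed fix): load A by an explicit path with RTLD_GLOBAL so its symbols are visible to later loads, then load B. The path to libOpenMMOpenCL.dylib below is a guess for illustration. Note that on macOS the "Library not loaded" message comes from dyld resolving B's recorded install name rather than from symbol lookup, so load order alone may not be enough there.

        #include <dlfcn.h>
        #include <stdio.h>

        int main(void)
        {
            /* Load A first, with RTLD_GLOBAL, so B can resolve symbols against it.
               The path to A is an assumption for this sketch. */
            void *a = dlopen("/usr/local/openmm/lib/libOpenMMOpenCL.dylib",
                             RTLD_NOW | RTLD_GLOBAL);
            if (!a) { fprintf(stderr, "A: %s\n", dlerror()); return 1; }

            void *b = dlopen("/usr/local/openmm/lib/plugins/libOpenMMRPMDOpenCL.dylib",
                             RTLD_NOW);
            if (!b) { fprintf(stderr, "B: %s\n", dlerror()); return 1; }

            return 0;
        }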

    Read the article

  • Avoiding Agnostic Jagged Array Flattening in Powershell

    - by matejhowell
    Hello, I'm running into an interesting problem in PowerShell and haven't been able to find a solution to it. When I google (and find things like this post), nothing quite as involved as what I'm trying to do comes up, so I thought I'd post the question here.

    The problem has to do with multidimensional arrays whose outer array has length one. PowerShell is very adamant about flattening such arrays, so @( @('A') ) becomes @( 'A' ). Here is the first snippet (the prompt is >, btw):

        > $a = @( @( 'Test' ) )
        > $a.gettype().isarray
        True
        > $a[0].gettype().isarray
        False

    I'd like $a[0].gettype().isarray to be true, so that I can index the value as $a[0][0]. (The real-world scenario is processing dynamic arrays inside a loop, where I'd like to get the values as $a[$i][$j]; but if the inner item is not recognized as an array but as a string, as in my case, you start indexing into the characters of the string, so $a[0][0] -eq 'T'.) I have a couple of long code examples, so I have posted them at the end. For reference, this is on Windows 7 Ultimate with PSv2 and PSCX installed.

    Consider code example 1: I build a simple array manually using the += operator. The intermediate array $w is flattened, and consequently is not added to the final array correctly. I have found solutions online for similar problems, which basically involve putting a comma before the inner array to force the outer array not to flatten. That does work, but I'm looking for a solution that can build arrays inside a loop (a jagged array of arrays, processing a CSS file), so if I add the leading comma to the single-element array (implemented as intermediate array $y), I'd like to do the same for other arrays (like $z), but that adversely affects how $z is added to the final array.

    Now consider code example 2: this is closer to the actual problem I am having. When a multidimensional array with one element is returned from a function, it is flattened; it is correct before it leaves the function. Again, these are just examples; I'm really trying to process a file without having to know whether the function is going to come back with @( @( 'color', 'black') ) or with @( @( 'color', 'black'), @( 'background-color', 'white') ).

    Has anybody encountered this, and has anybody resolved it? I know I can instantiate framework objects, and I'm assuming everything will be fine if I create an object[], a list, or something similar, but I've been dealing with this for a while and it sure seems like there has to be a right way to do this without having to instantiate true framework objects.

    Code Example 1

        function Display($x, [int]$indent, [string]$title) {
            if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline }
            if(!$x.GetType().IsArray) {
                write-host "'$x'" -foregroundcolor cyan
            } else {
                write-host ''
                $s = new-object string(' ', $indent)
                for($i = 0; $i -lt $x.length; $i++) {
                    write-host "$s[$i]: " -nonewline -foregroundcolor cyan
                    Display $x[$i] $($indent+1)
                }
            }
            if($title -ne '') { write-host '' }
        }

        ### Start Program
        $final = @( @( 'a', 'b' ), @('c'))
        Display $final 0 'Initial Value'

        ### How do we do this part ???
        ##########################################
        ## $w = @( @('d', 'e') )
        ## $x = @( @('f', 'g'), @('h') )
        ## # But now $w is flat, $w.length = 2
        ##
        ## # Even if we put a leading comma (,)
        ## # in front of the array, $y will work
        ## # but $w will not. This can be a
        ## # problem inside a loop where you don't
        ## # know the length of the array, and you
        ## # need to put a comma in front of
        ## # single- and multidimensional arrays.
        ## $y = @( ,@('D', 'E') )
        ## $z = @( ,@('F', 'G'), @('H') )
        ##
        ##########################################
        $final += $w
        $final += $x
        $final += $y
        $final += $z
        Display $final 0 'Final Value'

        ### Desired final value:
        ###   @( @('a', 'b'), @('c'), @('d', 'e'), @('f', 'g'), @('h'),
        ###      @('D', 'E'), @('F', 'G'), @('H') )
        ### As in the below:
        #
        # Initial Value:
        # [0]:
        #  [0]: 'a'
        #  [1]: 'b'
        # [1]:
        #  [0]: 'c'
        #
        # Final Value:
        # [0]:
        #  [0]: 'a'
        #  [1]: 'b'
        # [1]:
        #  [0]: 'c'
        # [2]:
        #  [0]: 'd'
        #  [1]: 'e'
        # [3]:
        #  [0]: 'f'
        #  [1]: 'g'
        # [4]:
        #  [0]: 'h'
        # [5]:
        #  [0]: 'D'
        #  [1]: 'E'
        # [6]:
        #  [0]: 'F'
        #  [1]: 'G'
        # [7]:
        #  [0]: 'H'

    Code Example 2

        function Display($x, [int]$indent, [string]$title) {
            if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline }
            if(!$x.GetType().IsArray) {
                write-host "'$x'" -foregroundcolor cyan
            } else {
                write-host ''
                $s = new-object string(' ', $indent)
                for($i = 0; $i -lt $x.length; $i++) {
                    write-host "$s[$i]: " -nonewline -foregroundcolor cyan
                    Display $x[$i] $($indent+1)
                }
            }
            if($title -ne '') { write-host '' }
        }

        function funA() {
            $ret = @()
            $temp = @(0)
            $temp[0] = @('p', 'q')
            $ret += $temp
            Display $ret 0 'Inside Function A'
            return $ret
        }

        function funB() {
            $ret = @( ,@('r', 's') )
            Display $ret 0 'Inside Function B'
            return $ret
        }

        ### Start Program
        $z = funA
        Display $z 0 'Return from Function A'
        $z = funB
        Display $z 0 'Return from Function B'

        ### Desired final value: @( @('p', 'q') ) (and the same for r, s)
        ### As in the below:
        #
        # Inside Function A:
        # [0]:
        #  [0]: 'p'
        #  [1]: 'q'
        #
        # Return from Function A:
        # [0]:
        #  [0]: 'p'
        #  [1]: 'q'

    Thanks, Matt
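
    Not an authoritative fix, but for reference, the unary-comma idiom usually suggested for this looks like the sketch below: prepending , wraps a sub-array in a one-element array, so += appends it as a single element, and a function's return value survives the unrolling that happens on return.

        # Sketch only; variable names match the examples above where possible.
        $final = @()
        $w = @('d', 'e')
        $x = @('f', 'g')
        $final += ,$w        # $final.Length is 1; $final[0] is @('d','e')
        $final += ,$x        # $final[1] is @('f','g')

        function Get-Pairs {
            $ret = @()
            $ret += ,@('color', 'black')
            return ,$ret     # the comma's wrapper, not $ret itself, gets unrolled
        }

        $pairs = Get-Pairs
        $pairs[0].GetType().IsArray   # True
        $pairs[0][0]                  # 'color'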

    Read the article

  • Is there a difference between boost iostream mapped file and boost interprocess mapped file?

    - by Yijinsei
    I want to map a binary file into memory; however, I am not sure how to create the file that gets mapped. I've read the documentation several times and realize there are two mapped-file implementations, one in Boost.Iostreams and the other in Boost.Interprocess. Do you have any idea how to create a mapped file in shared memory? I am trying to allow a multi-threaded program to read a large array of doubles written in a binary file format. Also, what is the difference between the mapped file in Iostreams and the one in Interprocess?
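
    For illustration only (the file name and data layout are made up), reading a binary array of doubles through Boost.Iostreams' read-only mapping looks roughly like the sketch below; Boost.Interprocess instead provides its own file_mapping/mapped_region pair, which is geared toward sharing a region between processes.

        #include <boost/iostreams/device/mapped_file.hpp>
        #include <cstddef>
        #include <iostream>

        int main()
        {
            // Map the file read-only; multiple threads may then read through it.
            boost::iostreams::mapped_file_source file("doubles.bin");

            const double* values = reinterpret_cast<const double*>(file.data());
            std::size_t count = file.size() / sizeof(double);

            if (count > 0)
                std::cout << "first value: " << values[0] << "\n";
        }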

    Read the article

  • Oracle SQL*Plus command line: Is there a way to concatenate SET options?

    - by Lex
    I need to set up some SET options in the Oracle SQL*Plus command-line program each time I use it, such as SET HEADING OFF and the like, to beautify my results. I always have to enter each SET command on its own line, which is becoming annoying since I need to do this many times a day. There is no way to separate SET commands with semicolons; it doesn't accept them:

        SET HEADING OFF; SET LINESIZE 100;

    returns an error. A solution could be adding them to a control script and creating a shell alias, but I know control scripts execute and then exit and don't return control of the command line to you. So, does anybody know another solution? Or am I missing something?
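
    One possibility (a sketch, not the only approach): SQL*Plus runs a login.sql script found in the current directory or on SQLPATH (plus the site-wide glogin.sql) at startup, so the options can live there, one per line. The options below are only examples.

        SET HEADING OFF
        SET LINESIZE 100
        SET PAGESIZE 0
        SET FEEDBACK OFF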

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into a schema. If that schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.

    Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business, then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    [Figure: GSS Requirements Gathering]

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    eBusiness is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    [Figure: Enterprise Business Processes]

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity.

    Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place.

    · The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    · The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    · The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and enable operations in heterogeneous IT environments.

    · Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.
      o This enables model extensions that reflect business entities unique to your organization's operations.
    · The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries.
    · It uses variable-byte Unicode to support over 31 languages.
    · The schema encodes flexible date and flexible address formats for easy localizations.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.

    [Figure: Interconnected Business Entities]

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Ruby on Rails: How to verify Haml file syntax within a Rails project?

    - by Acidburn2k
    I've installed Haml in my project and it is working like a charm; the templates are being rendered without a problem. My question is how I can do the rendering on the command line, using the haml program. That would be super for debugging purposes. Meanwhile, when I try to compile a Haml file, I get an error on the first Rails-related Ruby code it finds:

        % cat app/views/dashboard/index.html.haml
        - title "Home"
        %p Lorem ipsum dolor sit amet...

        % haml app/views/dashboard/index.html.haml
        Exception on line 1: undefined method `title' for #<Object:0xb73283b0>
          Use --trace for backtrace.

    The page is rendered fine (returned correctly) through the webserver.

    Read the article

  • VB.NET localization

    - by PandaNL
    In my WinForms app in VB.NET I want to use the localization option, but I have a few questions/problems. I'm using a menu strip to select another language, but it doesn't change my menu strip text to the selected language. It does change my labels, buttons, and textboxes, but menu strips don't seem to change when I choose another language. Also, is it possible to get those resx files, such as MyForm.fr-FR.resx, compiled so they aren't external files outside my app? Or to get those files into a Language folder at the same location as my app, so I don't have all those fr-FR, nl-NL folders next to my program?

    Read the article

  • Scala isn't allowing me to execute a batch file whose path contains spaces. Same Java code does. What

    - by Geo
    Here's the code I have:

        var commandsBuffer = List[String]()
        commandsBuffer ::= "cmd.exe"
        commandsBuffer ::= "/c"
        commandsBuffer ::= '"'+vcVarsAll.getAbsolutePath+'"'
        commandsBuffer ::= "&&"
        otherCommands.foreach(c => commandsBuffer ::= c)
        val asArray = commandsBuffer.reverse.toArray
        val processOutput = processutils.Proc.executeCommand(asArray,true)
        return processOutput

    otherCommands is an Array[String] containing the following elements: vcbuild, /rebuild, and the path to a .sln file. vcVarsAll contains the path to Visual Studio's vcvarsall.bat; its path is C:\tools\microsoft visual studio 2005\vc\vcvarsall.bat. The error I receive is:

        'c:\Tools\Microsoft' is not recognized as an internal or external command, operable program or batch file.

    processutils.Proc.executeCommand has the following implementation:

        def executeCommand(params:Array[String],display:Boolean):(String,String) = {
            val process = java.lang.Runtime.getRuntime.exec(params)
            val outStream = process.getInputStream
            val errStream = process.getErrorStream
            ...
        }

    The same code, executed from Java/Groovy, works. What am I doing wrong?

    Read the article

  • How to insert random characters in a text file at random positions using C

    - by Shantanu Gupta
    I'm writing a program to insert random characters into a text file, in between the text, so that no one can understand it. For example, suppose this is my text file a.txt, with content:

        Hi my name is abc. I like to play XYZ

    Now I will call a random function in C and take the random number modulo 26 to get the character to be inserted at a random position, e.g. "Him mayn mae lkd", etc. How can I insert these random characters in between the contents of the file?
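
    One possible approach (a sketch only; the output file name and the insert-every-few-characters rate are arbitrary choices) is to read the whole file into memory and write it back out, dropping in a random letter as you go:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            FILE *in = fopen("a.txt", "rb");
            if (!in) return 1;
            fseek(in, 0, SEEK_END);
            long len = ftell(in);
            rewind(in);

            char *buf = malloc(len);
            fread(buf, 1, len, in);
            fclose(in);

            srand((unsigned)time(NULL));
            FILE *out = fopen("a_scrambled.txt", "wb");
            for (long i = 0; i < len; i++) {
                fputc(buf[i], out);
                if (rand() % 4 == 0)                 /* roughly every 4th character */
                    fputc('a' + rand() % 26, out);   /* random lowercase letter */
            }
            fclose(out);
            free(buf);
            return 0;
        }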

    Read the article

  • NullPointerException with CallableStatement.getResultSet()

    - by Raj
    Hello, I have a stored proc in SQL Server 2005, which looks like the following (simplified):

        CREATE PROCEDURE FOO @PARAMS
        AS
        BEGIN
            -- STEP 1: POPULATE tmp_table
            DECLARE @tmp_table TABLE (...)
            INSERT INTO @tmp_table SELECT * FROM BAR

            -- STEP 2: USE @tmp_table FOR FINAL SELECT
            SELECT abc, pqr FROM BAZ JOIN @tmp_table ON some_criteria
        END

    When I run this proc from SQL Server Management Studio, things work fine. However, when I call the same proc from a Java program, using something like:

        cs = connection.prepareCall("exec proc ?,");
        cs.setParam(...);
        rs = cs.getResultSet(); // BOOM - Null!
        while(rs.next()) {...}  // NPE!

    I fail to understand why the first result set returned is null. Can someone explain this to me? As a workaround, if I check cs.getMoreResults() and, if true, try another getResultSet(), THIS time it returns the proper result set. Any pointers, please? (I'm using jTDS drivers, if it matters.) Thanks, Raj
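
    Not a definitive diagnosis, but a commonly cited cause is that statements inside the procedure (such as the INSERT into the table variable) emit update counts before the SELECT's result set; the usual suggestions are either adding SET NOCOUNT ON to the procedure or walking past the update counts on the JDBC side. A sketch of the latter, with a placeholder URL, credentials, and parameter:

        import java.sql.*;

        public class CallProc {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:jtds:sqlserver://localhost/mydb", "user", "password");
                     CallableStatement cs = con.prepareCall("{call FOO(?)}")) {

                    cs.setInt(1, 42);                       // placeholder parameter
                    boolean isResultSet = cs.execute();

                    // Skip any leading update counts until a result set shows up.
                    while (!isResultSet && cs.getUpdateCount() != -1) {
                        isResultSet = cs.getMoreResults();
                    }
                    if (isResultSet) {
                        try (ResultSet rs = cs.getResultSet()) {
                            while (rs.next()) {
                                System.out.println(rs.getString("abc"));
                            }
                        }
                    }
                }
            }
        }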

    Read the article

  • reference suggestion in latex windows (multi-file structure)

    - by voodoomsr
    Is there a LaTeX editor that manages labels across a multi-file document? I tried TexMaker and LEd, and they offer suggestions for the labels present in the current document and in the "master" document, but not for labels in the other files of the structure. I wrote a script to find those other labels, but it would be really good if the suggestions covered the whole file structure automatically. Finally I tried Vim with Latex-Suite; this extension has that capability, but I have problems configuring the grep program despite having followed all the instructions on their website and in the help files :(. The structure is like this:

        document.tex
        mystyle.sty
        mybibliography.bib
        tex/file1.tex
        tex/file2.tex
        tex/...
        img/img1.png
        img/img2.png
        ...
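
    For reference only (a sketch of the usual setup, not a guaranteed fix for this case), the .vimrc lines that the Latex-Suite installation notes ask for look like this; the grepprg line is the one that affects how the suite shells out to grep:

        " Settings commonly recommended for Vim + Latex-Suite
        filetype plugin on
        filetype indent on
        set grepprg=grep\ -nH\ $*
        let g:tex_flavor='latex'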

    Read the article

  • How does `cat << EOF` work in bash?

    - by hasen j
    I needed to write a script to feed multi-line input to a program (psql). After a bit of googling, I found the following syntax works:

        cat << EOF | psql ---params
        BEGIN;
        `pg_dump ----something`
        update table .... statement ...;
        END;
        EOF

    This correctly concatenates all these strings and passes the result as input to psql, but I have no idea how or why it works. Can someone please explain? I'm referring mainly to cat << EOF. I know > outputs to a file, >> appends to a file, and < reads input from a file. What does << do exactly? And is there a man page for it?
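
    Here documents are described in bash's own man page under the "Here Documents" heading (man bash) rather than on a separate page. A couple of variations for illustration; the database name below is a placeholder:

        # The heredoc feeds standard input of whatever command it is attached
        # to, so the extra cat is optional. The delimiter word (EOF) is arbitrary.
        psql mydatabase <<EOF
        BEGIN;
        UPDATE some_table SET some_col = 1;  -- note: shell expansion happens here because EOF is unquoted
        END;
        EOF

        # Quoting the delimiter turns off expansion inside the block:
        cat <<'EOF'
        $HOME is printed literally because the delimiter was quoted.
        EOF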

    Read the article

  • Are some data structures more suitable for functional programming than others?

    - by Rob Lachlan
    In Real World Haskell, there is a section titled "Life without arrays or hash tables" where the authors suggest that lists and trees are preferred in functional programming, whereas an array or a hash table might be used instead in an imperative program. This makes sense, since it's much easier to reuse part of an (immutable) list or tree when creating a new one than to do so with an array. So my questions are: Are there really significantly different usage patterns for data structures between functional and imperative programming? If so, is this a problem? What if you really do need a hash table for some application? Do you simply swallow the extra expense incurred for modifications?
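
    To make the sharing point concrete, a small illustrative sketch (standard containers library only; Data.Map, a balanced tree, is the structure often reached for where an imperative program would use a hash table):

        import qualified Data.Map as M

        -- Structural sharing: ys reuses the whole spine of xs; building it
        -- allocates only one new cons cell.
        xs, ys :: [Int]
        xs = [2, 3, 4]
        ys = 1 : xs

        -- Persistent maps: m1 is untouched by the insert, and most of the
        -- tree is shared between m1 and m2.
        m1, m2 :: M.Map Int String
        m1 = M.fromList [(1, "a"), (2, "b")]
        m2 = M.insert 3 "c" m1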

    Read the article

  • Open Source Embedded Database Options for .Net Applications

    - by LonnieBest
    I have a .NET 4.0 WPF application that requires an embedded database. MS Access doesn't work on 64-bit Windows, I'm told, and I'm having issues with SSCE:

        Unable to load DLL 'sqlceme35.dll': This application has failed to start because the application
        configuration is incorrect. Reinstalling the application may fix this problem.
        (Exception from HRESULT: 0x800736B1)

    sqlceme35.dll is installed in my application's Program Files directory, so I can't figure out why Windows XP Pro doesn't see it. I was wondering about other embedded database options I might use that work on both 32- and 64-bit Windows. Any suggestions?

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
    Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent.

    2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities.

    3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create.
    These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People.

    Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • Return a pointer to a char array in C

    - by snitko
    I've seen a lot of questions on this on StackOverflow, but reading the answers did not clear it up for me, probably because I'm a total newbie in C programming. Here's the code:

        #include <stdio.h>

        char* squeeze(char s[], char c);

        main() {
            printf("%s", squeeze("hello", 'o'));
        }

        char* squeeze(char s[], char c) {
            int i, j;
            for(i = j = 0; s[i] != '\0'; i++)
                if(s[i] != c)
                    s[j++] = s[i];
            s[j] = '\0';
            return s;
        }

    It compiles, and I get a segmentation fault when I run it. I've read this FAQ about returning arrays and tried the 'static' technique suggested there, but still could not get the program working. Could anyone point out exactly what's wrong with it, and what I should be paying attention to in the future?
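
    For comparison only (this targets the usual culprit, modifying a string literal in place, and is not a confirmed diagnosis of this exact crash): the same squeeze() called on a writable array instead of the literal "hello":

        #include <stdio.h>

        char *squeeze(char s[], char c);

        int main(void)
        {
            char buf[] = "hello";          /* array copy of the string, safe to modify */
            printf("%s\n", squeeze(buf, 'o'));
            return 0;
        }

        char *squeeze(char s[], char c)
        {
            int i, j;
            for (i = j = 0; s[i] != '\0'; i++)
                if (s[i] != c)
                    s[j++] = s[i];
            s[j] = '\0';
            return s;
        }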

    Read the article

  • Can you help me with my Perl homework?

    - by riya
    Could someone write simple Perl programs for the following scenarios?

    - Convert a list from {1,2,3,4,5,7,9,10,11,12,34} to {1-5,7,9-12,34}.
    - Sort a list of negative numbers.
    - Insert values into a hash: there is a file with content

          C1 c2 c3 c4
          r1 r2 r3 r4

      put it into a hash where keys = {c1,c2,c3,c4} and values = {r1,r2,r3,r4}.
    - There are test cases running; each test case runs as a process and has a process ID. The logs are written to a logfile with the process ID appended to each line. Write a program to find out whether a test case has passed or failed. The program should keep running while the processes are running and display the output.

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is correct, but the implementation is not as easy as it seems. Similarly, many people think the lookup component is just a component that looks up additional information, and they do not pay much attention to it; for that reason they eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to use the lookup component well through its cache mode.

    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast.

    Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection determines whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages and, in some cases, incorrect results.

    Full Cache

    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode causes the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS.

    The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution.

    There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues (memory pressure in particular) when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there is going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation). There is a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these presents a reason to avoid full cache mode, but they should be weighed when deciding whether full cache mode fits a given situation.

    Full cache mode is ideally useful when one or all of the following conditions exist:

    - The size of the reference data set is small to moderately sized
    - The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data

    Partial Cache

    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s).

    This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode.

    Using partial cache mode is ideally suited for the conditions below:

    - The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small
    - The size of the lookup data is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values

    No Cache

    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful.

    As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the conditions below are true:

    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)

    Conclusion

    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
    Tagged: SSIS

    Read the article

  • Why does my stack overflow error occur after 518669 specifically?

    - by David
    I created a Java program to count up toward infinity:

        class up {
            public static void up (int n) {
                System.out.println (n) ;
                up (n+1) ;
            }

            public static void main (String[] arg) {
                up (1) ;
            }
        }

    I didn't actually expect it to get there, but the thing I noticed that was a bit curious was that it stopped at the same number each time: 518669. What is the significance of this number (or of this number + 1, I suppose)? (Also, as a bit of an aside question: I've been told that the way I format my code is bad [indentation and such]; what am I doing that isn't desirable?)
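
    One way to probe whether that number is simply a function of the thread stack size (an assumption worth testing, not a confirmed explanation) is to rerun with a different -Xss value and see whether the last number printed moves:

        # default stack size
        java up
        # with a larger requested stack the recursion should go noticeably deeper
        java -Xss8m up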

    Read the article

  • What is the oldest glibc version that a Qt application can run with?

    - by yan bellavance
    I am trying to build a standalone Qt application (built on Ubuntu and deployed on Red Hat 5.3, both 64-bit). After building a Qt application that is statically linked to the Qt library, I tried to run the program on Red Hat and got an error saying libc.so.6 was not found and that GLIBC_2.9 or GLIBC_2.10 is not installed and is needed. I tried doing a yum install glibc, but then I get a message saying that glibc is up to date (its version is 2.0). I guess I am going to restart the build process, but this time from a Red Hat installation. What do you suggest I do in this case? My goal is to build a standalone Qt application that only needs to run on Red Hat 5 (I'm pretty sure there is also going to be an issue with fontconfig.so, but I can simply provide that library directly in the same directory as the app).
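
    As a quick sanity check (a sketch; the binary name is a placeholder), you can list which glibc symbol versions the Ubuntu-built binary actually demands and compare that with what the Red Hat 5.3 box ships:

        # on the build machine (or wherever the binary lives)
        objdump -T ./myapp | grep GLIBC_ | sort -u

        # on the Red Hat 5.3 host
        rpm -q glibc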

    Read the article

  • What happens when I load an assembly?

    - by Baddie
    In my ASP.NET MVC application, I have the following setup:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <probing privatePath="bin;extras"/>

    I have referenced assemblies located in the extras folder in the views, and they have worked perfectly (using <%@ Import Namespace="myNameSpace" %>). My questions:

    - What happens when that line is called? Where is the assembly loaded?
    - Why is it that I can't overwrite the assembly located in the extras folder that contains myNameSpace with a newer version? (I get an error saying that the assembly is "open" in another program.)
    - Is there a way to overwrite the assembly with a newer version without having the application restart?

    Read the article

  • SqlPlus on Mac OS X 10.6 doesn't work

    - by lesce
    When I try to run:

        # sqlplus system@orcl

    it gives me this error:

        SQL*Plus: Release 10.1.0.3.0 - Production on Tue Apr 20 02:24:41 2010
        Copyright (c) 1982, 2004, Oracle. All rights reserved.
        Enter password:
        ERROR:
        ORA-12154: TNS:could not resolve the connect identifier specified

    The Oracle server is working; I can connect through SQL Developer. My .profile looks like this:

        export PATH=/opt/local/bin:/opt/local/sbin:$PATH
        # Setting PATH for Python 3.1
        # The orginal version is saved in .profile.pysave
        PATH="/Library/Frameworks/Python.framework/Versions/3.1/bin:${PATH}"
        DYLD_LIBRARY_PATH=/Users/lesce/instantclient
        export TNS_ADMIN=/Users/lesce/instantclient
        export ORACLE_SID="orcl"
        export DYLD_LIBRARY_PATH
        export PATH=$PATH:/Users/lesce/instantclient

    tnsnames.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    listener.ora:

        SID_LIST_LISTENER =
          (SID_LIST =
            (SID_DESC =
              (SID_NAME = PLSExtProc)
              (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
              (PROGRAM = extproc)
            )
            (SID_DESC =
              (SID_NAME = orcl)
              (ORACLE_HOME = /Users/oracle/oracle/product/10.2.0/db_1)
            )
          )

        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
            )
          )

    My SQL Developer configuration:

        username: sys
        role: sysdba
        connection type: basic
        hostname: localhost
        port: 1521
        sid: orcl

    Read the article

  • Unit testing several implementation of the same trait/interface

    - by paradigmatic
    I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle. For instance, when testing implementations of lists, tests could include:

    - An instance should be empty if and only if it has zero size.
    - After calling clear, the size should be zero.
    - Adding an element in the middle of a list will increment by one the index of the elements to its right.

    What are the best practices?
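
    One pattern that comes up for this, shown only as a sketch (it assumes an older ScalaTest where FunSuite lives at org.scalatest.FunSuite, and uses standard-library buffers as stand-ins for your own implementations): register the contract tests in a trait with an abstract factory method, then mix that trait into one suite per implementation.

        import org.scalatest.FunSuite
        import scala.collection.mutable

        // The shared contract: every concrete suite supplies newBuffer.
        trait BufferContract extends FunSuite {
          def newBuffer: mutable.Buffer[Int]

          test("a new buffer is empty iff its size is zero") {
            val b = newBuffer
            assert(b.isEmpty == (b.size == 0))
          }

          test("clear leaves size zero") {
            val b = newBuffer
            b += 1; b += 2
            b.clear()
            assert(b.size == 0)
          }
        }

        // One suite per implementation under test.
        class ListBufferSpec extends BufferContract {
          def newBuffer = mutable.ListBuffer[Int]()
        }

        class ArrayBufferSpec extends BufferContract {
          def newBuffer = mutable.ArrayBuffer[Int]()
        }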

    Read the article

  • string parsing to double fails in C++

    - by helixed
    Here's a fun one I've been trying to figure out. I have the following program:

        #include <iostream>
        #include <string>
        #include <sstream>

        using namespace std;

        int main(int argc, char *argv[]) {
            string s("5");
            istringstream stream(s);

            double theValue;
            stream >> theValue;

            cout << theValue << endl;
            cout << stream.fail();
        }

    The output is:

        0
        1

    I don't understand why this is failing. Could somebody please tell me what I'm doing wrong? Thanks, helixed

    Read the article
