Search Results

Search found 60836 results on 2434 pages for 'system io directory'.


  • Problem using yaml-cpp on OS X

    - by Thomas
    So I'm having trouble compiling my application, which uses yaml-cpp. I'm including "yaml.h" in my source files (just like the examples in the yaml-cpp wiki), but when I try compiling the application I get the following error:

        g++ -c -o entityresourcemanager.o entityresourcemanager.cpp
        entityresourcemanager.cpp:2:18: error: yaml.h: No such file or directory
        make: *** [entityresourcemanager.o] Error 1

    My makefile looks like this:

        CC = g++
        CFLAGS = -Wall
        APPNAME = game
        UNAME = uname
        OBJECTS := $(patsubst %.cpp,%.o,$(wildcard *.cpp))

        mac: $(OBJECTS)
            $(CC) `pkg-config --cflags --libs sdl` `pkg-config --cflags --libs yaml-cpp` $(CFLAGS) -o $(APPNAME) $(OBJECTS)

    pkg-config --cflags --libs yaml-cpp returns:

        -I/usr/local/include/yaml-cpp -L/usr/local/lib -lyaml-cpp

    and yaml.h is indeed located in /usr/local/include/yaml-cpp. Any idea what I could do? Thanks
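    One thing worth checking: the pkg-config flags only appear on the link line, but the .o files are built by make's implicit rule, which never sees -I/usr/local/include/yaml-cpp - and a missing include path at compile time produces exactly this "yaml.h: No such file or directory" error. A minimal sketch of a makefile that passes the flags at compile time too (a hedged guess at a fix, assuming GNU make; recipe lines must be indented with tabs):

        CC = g++
        CFLAGS = -Wall $(shell pkg-config --cflags sdl yaml-cpp)
        LIBS = $(shell pkg-config --libs sdl yaml-cpp)
        APPNAME = game
        OBJECTS := $(patsubst %.cpp,%.o,$(wildcard *.cpp))

        mac: $(OBJECTS)
            $(CC) $(CFLAGS) -o $(APPNAME) $(OBJECTS) $(LIBS)

        # explicit pattern rule so every compile also gets the include paths
        %.o: %.cpp
            $(CC) $(CFLAGS) -c $< -o $@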

    Read the article

  • Why doesn't .NET find the OpenSSL.NET dll?

    - by Lazlo
    EDIT (rewrote the whole question; it was too unclear.) I want to use OpenSSL.NET. The OpenSSL.NET install instructions page says:

        INSTALL
        - Make sure you have libeay32.dll and ssleay32.dll in the current working directory of your application or in your PATH. [DONE]
        - In your .NET project, add a reference to the ManagedOpenSsl.dll assembly. [DONE]

    I have put libeay32.dll and ssleay32.dll in both my bin/Debug and bin/Release directories. I have also put them in system32. Here is my full code:

        using System;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    try
                    {
                        OpenSSL.Crypto.RSA rsa = new OpenSSL.Crypto.RSA();
                    }
                    catch (Exception e)
                    {
                        Console.WriteLine(e.InnerException.Message);
                    }
                    Console.Read();
                }
            }
        }

    I get the following error: Unable to load DLL 'libeay32'. What am I doing wrong? Why isn't the DLL found?
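    One hedged thing to rule out on 64-bit Windows before anything else: the stock libeay32.dll / ssleay32.dll builds are 32-bit, and an AnyCPU .NET executable runs as a 64-bit process there, which cannot load 32-bit native DLLs no matter where they sit. A quick diagnostic (plain .NET, nothing OpenSSL-specific):

        // If this prints 8 while libeay32.dll is a 32-bit build, the load fails.
        // Possible fix: set the project's platform target to x86, or use 64-bit DLLs.
        Console.WriteLine(IntPtr.Size);  // 4 = 32-bit process, 8 = 64-bit process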

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguishes the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application. An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose. For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated.

    Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules on your data by encapsulating a datum's name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data cannot assume an invalid value. To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you.

    Let's take a look at how these features are used. Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database. This table includes two columns, EmployeeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table's content.

    When developing this application within expressor Studio, your first task is to create a schema artifact for the database table. This process is completely driven by a wizard, only requiring that you select the desired database schema and table. The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application. The structure of the record within the expressor application is a composite type that is given the default name CompositeType1. All columns from the table are included in the result set and mapped to an identically named attribute in the default composite type.

    If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns. A typical approach would be to drop this unwanted data in a downstream operator, but using an alternative composite type provides a better approach, in which the unwanted data never enters your application. While working in expressor Studio's schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns.

    The value of an alternative composite type is even more apparent when you want to insert into or update the table. In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns, since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes, and there is no risk of unintentionally overwriting a value in a column that does not need to be updated.

    Now, what about the option to use the composite type to enforce business rules? If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications. For example, the maximum number of characters in the NationalIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition. If your application code leads to a violation of these constraints, an error will be raised. The expressor design paradigm then allows you to handle the error in a way suitable for your application; for example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition.

    Let's assume that the only acceptable values for marital status are S, M, and D. Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window, then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly.

    There is one more option that the expressor semantic type paradigm supports. Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and, in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type. There's no requirement that you give the shared type the same name as the attribute from which it was derived; you should supply a name that makes it obvious what the shared type represents.

    In this posting, I've given an overview of the expressor semantic type paradigm and shown how it can be used to make your application development process more productive. The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I'm certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly. As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Themes in Android?

    - by androidbase Praveen
    Hi all, I have an idea to create themes for Android mobile devices, but I have no knowledge in that area. I would need to know the following: What is the file format of a theme for Android? What kinds of things do I need to handle to change a theme (i.e. background, directory window, wallpaper, icon selector style, and so on)? How do I start learning about this - sites and tutorials for beginners, sample applications and code? If you know anything about the above, please share it with me; it would be most helpful. Thanks.

    Read the article

  • "from _json import..." - python

    - by RoseOfJericho
    Hello, all. I am inspecting the JSON module of Python 3.1, and am currently in /Lib/json/scanner.py. At the top of the file is the following line:

        from _json import make_scanner as c_make_scanner

    There are five .py files in the module's directory: __init__.py, decoder.py, encoder.py, scanner.py and tool.py. There is no file called "json". My question is: when doing the import, where exactly is "make_scanner" coming from? Yes, I am very new to Python!
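    For what it's worth, _json is not a .py file at all: it is the C accelerator module backing the json package, so it lives inside the interpreter itself (or ships as a compiled extension), never in /Lib/json/. A quick way to confirm this from the REPL:

        import sys
        import _json

        # True when the module is compiled into the interpreter binary
        print('_json' in sys.builtin_module_names)
        # a .pyd/.so path when it ships as a separate extension module instead
        print(getattr(_json, '__file__', '<built-in>'))
        # the C implementation that scanner.py aliases to c_make_scanner
        print(_json.make_scanner)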

    Read the article

  • Android SDK and AVD manager will not run from Eclipse after upgrade to SDK 5 and ADT 0.9.6

    - by user303944
    Using Windows 7, 64 bit system. Prior to upgrade I was able to run "Android SDK and AVD manager" from Eclipse via a tool bar icon and menu option, both of which still exist. However now nothing happens when I try to run the manager. As a result I can't start an emulator from within Eclipse. When I use Eclipse to run an Android app, the first emulator I installed is automatically started. Using Windows Explorer, I can still run the manager from the SDK directory in which the update was applied (the update didn't change the location of the SDK). If I run the manager and start multiple emulators and then Run an app from Eclipse, it sees the emulators and allows me to choose one as before. This is a satisfactory work-around, but it would be nice if the manager were fully integrated into Eclipse as it was before.

    Read the article

  • svn branch commit - experimental commit

    - by quano
    I've made some experimental code that I would like to save in the repository, but I don't want it on the main branch. How would you commit this to a branch? Maybe I've got this wrong, but from what I've understood about branching, all you actually do is copy already checked-in code to another directory in the repository. I suppose one could copy the main branch to another location, then change the working copy's repository location pointer to point at that location, and then commit the experimental code. But that seems a bit long-winded. Is this really how you do it?
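    That is essentially how branching works in Subversion, but the copy is a cheap server-side operation and you don't need a fresh checkout: svn switch repoints the existing working copy while keeping local modifications. A hedged sketch (the repository URL and branch name are invented):

        # create the branch as a cheap, server-side copy - no checkout involved
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/experimental \
                 -m "Branch for experimental work"

        # repoint the working copy (uncommitted changes are preserved)
        svn switch http://svn.example.com/repo/branches/experimental

        # commit the experimental code to the branch, leaving trunk untouched
        svn commit -m "Experimental changes"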

    Read the article

  • How do I create a new folder and deploy files to the 12 hive using VSeWSS 1.3?

    - by Nathan DeWitt
    I have created a web part using VSeWSS 1.3. It creates a wsp file and my web part gets installed; everything works great. I would also like to create a folder in the LAYOUTS directory of the 12 hive and place a couple of files in there. How do I go about doing this? I know that I can manually place the files there, but I would prefer to have it all done in one fell swoop when I use stsadm to install my solution. Is there a best practices guide out there for using VSeWSS 1.3 to do this? They changed a bunch of stuff with this new version and I want to make sure I don't mess anything up.
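    One hedged pointer: inside a WSP, files that land under the 12 hive are declared as template files in the solution manifest, with paths relative to 12\TEMPLATE, and VSeWSS builds this manifest from a matching folder structure in the project. A sketch of the relevant manifest entries (folder and file names are made up):

        <!-- manifest.xml inside the .wsp; Location is relative to 12\TEMPLATE -->
        <TemplateFiles>
          <TemplateFile Location="LAYOUTS\MyWebPartFiles\helper.js" />
          <TemplateFile Location="LAYOUTS\MyWebPartFiles\styles.css" />
        </TemplateFiles>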

    Read the article

  • UIImagePickerController strange bug

    - by Rahul Vyas
    Hi, I'm doing something with UIImagePickerController. It works fine and the picture browser does open; however, I get this message: "Failed to save the videos metadata to the filesystem. Maybe the information did not conform to a plist." What could be causing that? It is triggered by this line, which runs on a button click:

        [self presentModalViewController:self.imgPicker animated:YES];

    Snippets of my code are below:

        - (void)viewDidLoad {
            self.imgPicker = [[UIImagePickerController alloc] init];
            self.imgPicker.allowsImageEditing = YES;
            self.imgPicker.delegate = self;
        }

        - (IBAction)grabImage {
            [self presentModalViewController:self.imgPicker animated:YES];
        }

    It's also removing images from the documents directory and from the resources folder, and I cannot understand why. Has someone found a solution to this?

    Read the article

  • Can't read from aspnet_client folder for crystal reports

    - by Hank Allen
    I created a little ASP.NET app to run Crystal Reports. It runs fine from VS 2008, but not when deployed to IIS on Windows 7: the toolbar images are not rendered. The problem seems to be that I can't read from the aspnet_client folder, even though I've made it into a virtual directory. I can't even read images I put in there just to see if the folder can be read from an ASP page. I also made sure the IIS user can read from there. I'm stumped.

    Read the article

  • Sitecollection Overview Page

    - by ronischuetz
    I have the following situation: MOSS 2007 Server Environment A - an intranet - and MOSS 2007 Server Environment B - a collaboration environment (approx. 150 site collections for various issues). Both environments are on different infrastructures, but we use the same Active Directory and the same groups. Now we would like to implement the following two things: an overview page within the intranet listing all available site collections on Environment B, and an overview page within the intranet listing only those site collections the user has access to. I'm now searching for good ideas on the best way to realise something like this. Thanks in advance for any response.

    Read the article

  • How to invoke an onclick function in HTML from VB.NET or C#

    - by vbNewbie
    I am trying to invoke the onclick function in an HTML page that displays content. I am using the HttpWebRequest control and not a browser control. I have traced the function and tried to find the link it calls, but when I inserted the link from the code below into the browser together with the main URL, it did not work.

        <div style="position:relative;" id="column_container">
        <a href="#" onclick="
            if (! loading_next_page) {
                loading_next_page = true;
                $('loading_recs_spinner').style.visibility = 'visible';
                new Ajax.Request('/recommendations?directory=non-profit&page=' + next_page, {
                    onComplete: function(transport) {
                        if (200 == transport.status) {
                            $('column_container').insert({ bottom: transport.responseText });
                            loading_next_page = false;
                            $('loading_recs_spinner').style.visibility = 'hidden';
                            next_page += 1;
                            if (transport.responseText.blank()) $('show_more_recs').hide();
                        }
                    }
                });
            }
            return false;">

    Any ideas would be deeply appreciated.
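    Since that onclick handler is just a Prototype Ajax.Request against /recommendations, one option is to skip simulating the click and call the endpoint directly. A hedged C# sketch (the host name and page number are assumptions; Prototype defaults to POST and sends the X-Requested-With header, which such apps often check):

        using System;
        using System.IO;
        using System.Net;

        class RecommendationFetcher
        {
            static void Main()
            {
                int nextPage = 2;  // assumption: whatever next_page held in the page
                var req = (HttpWebRequest)WebRequest.Create(
                    "http://www.example.com/recommendations?directory=non-profit&page=" + nextPage);
                req.Method = "POST";                                 // Ajax.Request's default verb
                req.Headers["X-Requested-With"] = "XMLHttpRequest";  // mark it as an Ajax call
                req.ContentLength = 0;

                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var reader = new StreamReader(resp.GetResponseStream()))
                {
                    // the same HTML fragment the page inserts into #column_container
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }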

    Read the article

  • Please help with properly setting up path variables for root.php

    - by Joel
    Hi guys, I just posted a similar question, but deleted it because I realized I was working with an old file... doh! I am just trying to get my XAMPP setup working for me. I have a live site that navigates to a login page at http://www.monkeycalendar.com/arvindkt/login.php. That login page includes a root.php file, found at http://www.monkeycalendar.com/arvindkt/root.php. The live site works great. My localhost is set up so each site is a folder under localhost, i.e. http://www.example.com = localhost/example.com. I'm having problems figuring out how to make my root folder point to the right directory. Any help would be much appreciated. root.php:

        # local settings
        define("SITE_ROOT", $_SERVER['DOCUMENT_ROOT']."/arvindkt");
        define("SITE_URL", "http://localhost/monkeycalendar.com");

        define('DB_HOST', "localhost");
        define('DB_USER', "root");
        define('DB_PASS', "");
        define('DB_NAME', "dev.monkeycalendar");
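    A hedged way to line the two environments up: since the local copy lives in a subfolder of htdocs rather than at the document root, both constants need that subfolder spelled out. A sketch of a local root.php (folder names are guesses based on the URLs above):

        <?php
        # hedged local sketch: the site sits in a subfolder of htdocs,
        # so both the filesystem path and the URL must include it
        define("SITE_ROOT", $_SERVER['DOCUMENT_ROOT']."/monkeycalendar.com/arvindkt");
        define("SITE_URL", "http://localhost/monkeycalendar.com/arvindkt");

        define('DB_HOST', "localhost");
        define('DB_USER', "root");
        define('DB_PASS', "");
        define('DB_NAME', "dev.monkeycalendar");  # assumption: local DB name unchanged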

    Read the article

  • Open Source Embedded Database Options for .Net Applications

    - by LonnieBest
    I have a .Net 4.0 WPF application that requires an embedded database. MS Access doesn't work on 64 bit Windows I'm told. And I having issues with SSCE: Unable to load DLL 'sqlceme35.dll': This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. (Exception from HRESULT: 0x800736B1) sqlceme35.dll is installed into my application's program files directory, so I can't seem to figure out why Windows XP Pro doesn't see it. I was wondering about other embedded databases options I might use that work on both 32 and 64 bit windows. Any suggestions?

    Read the article

  • Run another version of Python using virtualenv

    - by mazlor
    I apologize in advance if the question is dumb. I use Python 3.2.3 on Windows XP, and now I need Python 3.3.2, but I can't remove Python 3.2.3 because I have many codes and packages that need to be run by it. I installed virtualenv to run two versions of Python in two different environments, but after that I didn't know what to do to run code using Python 3.3.2. Here is what I did:

        C:\>virtualenv.exe env1
        C:\>env1\Scripts\activate

    I didn't know what to do after a folder named env1 was created, so I downloaded Python 3.3.2 and installed it in that same folder (env1) - is that correct? Then I tried the following:

        (env1) C:\>python3.3.2

    and got: 'python3.3.2' is not recognized as an internal or external command, operable program or batch file. I also tried:

        (env1) C:\>python python33

    and got: python: can't open file 'python33': [Errno 2] No such file or directory. As mentioned, I'm stuck at this point; any help will be very much appreciated. Thanks
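    For what it's worth, Python 3.3 does not get installed into the env1 folder: you install it normally (say, to C:\Python33) and point virtualenv at that interpreter with -p when creating the environment. A hedged sketch of the session (the install path is an assumption):

        C:\> virtualenv.exe -p C:\Python33\python.exe env1
        C:\> env1\Scripts\activate
        (env1) C:\> python --version
        Python 3.3.2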

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic.

    My Approach to Data Mining Projects

    It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site - either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project.

    If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it offers the establishment of a common background for all people involved, an understanding of how the algorithms work, an understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more.

    Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data.

    I favor the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems.

    Simultaneously with the data preparation, the data overview is performed. The logic behind this task is the same - again I show the teams involved the expected results, how to achieve them, and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance: it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.

    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of a project without the customer's SME.

    After the data preparation, and once the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube.

    After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time on a single 5 to 10 day project:

    - The customer learns the basic patterns of frauds and fraud detection
    - The customer learns how to do the entire cycle with their own people, only relying on me for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before
    - All of the attendees of this training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company or for a subsidiary of a larger company can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level, as a finished POC project for the entire enterprise
    - Typically, the project results in several important "side effects":
      - Improved data quality
      - Improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization
      - Because eventually more minds get to be involved in the enterprise, the company should expect more and better fraud detection patterns

    After the POC project is completed as described above, the actual project does not need months of engagement from my side. This is possible due to our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.

    I usually expect to perform the following tasks:

    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios:
      - Through reports
      - By including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings (a hedged DMX sketch follows at the end of this entry)
    - Include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - Investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary whereas a central system is available as well
    - Of course, for the actual project, repeat the data and model preparation as needed

    It is virtually impossible to tell in advance how much time the deployment would take before we decide, together with the customer, what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:

    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one hour more on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, according to my experience, is to restrict the time spent on the project in advance, after an agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time, without the need for constant engagement with me.
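    As a hedged illustration of the DMX deployment scenario in the task list above (every model, column, and data source name here is invented for the example), a real-time scoring query embedded in an OLTP application could look something like this:

        -- Hedged sketch: score incoming transactions against a mining model.
        -- [Fraud Model], [OLTP DataSource] and all columns are placeholders.
        SELECT
          t.TransactionID,
          Predict([Fraud Model].[Is Fraud])                    AS PredictedFlag,
          PredictProbability([Fraud Model].[Is Fraud], 'true') AS FraudProbability
        FROM [Fraud Model]
        PREDICTION JOIN
          OPENQUERY([OLTP DataSource],
                    'SELECT TransactionID, Amount, Channel FROM dbo.NewTransactions') AS t
          ON  [Fraud Model].[Amount]  = t.Amount
          AND [Fraud Model].[Channel] = t.Channel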

    Read the article

  • Exposing a .NET assembly as COM 101

    - by Jan Zich
    I am having trouble exposing a .NET assembly in COM. It seems that I must be missing some basic step, because I think I followed all the tutorials and documentation I found, as well as common sense, but still, when I do this in a test VBScript:

        Set o = CreateObject("MyLib.MyClass")

    it keeps saying that the object cannot be created. Here are the steps I have done: I have a simple one-method dummy class with no attributes. The class is in a class library which has "Make assembly COM-visible" ticked in Visual Studio. The class library is signed. The DLL is registered via RegAsm.exe with the /codebase parameter (I don't want / cannot add the DLL to the GAC). Just to be sure, I tried to copy the library to the same directory as the test VBScript, but it does not help. Edit: I should have mentioned that I can instantiate the class via COM if I put the DLL into the GAC.
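    A hedged sketch of what the class typically needs for a late-bound CreateObject to succeed: an explicit ProgId matching the VBScript string, plus ComVisible (the GUID is a placeholder - generate your own):

        using System;
        using System.Runtime.InteropServices;

        namespace MyLib
        {
            [ComVisible(true)]
            [ProgId("MyLib.MyClass")]                       // the name CreateObject() looks up
            [Guid("00000000-0000-0000-0000-000000000000")]  // placeholder GUID
            [ClassInterface(ClassInterfaceType.AutoDual)]
            public class MyClass
            {
                public string Ping() { return "pong"; }     // the one dummy method
            }
        }

    The other classic trap on 64-bit Windows is bitness: registering with the 32-bit RegAsm but testing with the 64-bit script host (or vice versa) produces exactly this "cannot create object" symptom.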

    Read the article

  • Symfony MarkDown

    - by Rui Gonçalves
    Hi there! I'm trying to add some MarkDown capabilities to my symfony project (symfony version 1.3.3). To accomplish that, I have already included the MarkDown library in the lib/vendor directory. I also added the needed configuration in autoload.yml for that library. However, I'm getting a fatal PHP error: Call to undefined function Markdown(). How can I resolve this problem? Thanks in advance for all the help, Best regards!
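    A hedged explanation: symfony's autoloader (and autoload.yml) only resolves classes, and Markdown() is a plain function, so the file that defines it is never loaded before the first call. A sketch of an explicit include (the vendor path is an assumption):

        <?php
        // autoload.yml maps classes only; a bare function such as Markdown()
        // must be required explicitly before use
        require_once sfConfig::get('sf_lib_dir').'/vendor/markdown/markdown.php';

        $html = Markdown($rawText);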

    Read the article

  • Subsume external library into source tree with Autotools

    - by troutwine
    I am developing a new project, using Autotools for my build infrastructure. I would like to subsume external dependencies into my source tree. These dependencies use Autotools as well. How can I configure my project's build scripts to build and link against the subsumed dependencies? Though Duret-Lutz's tutorial is excellent, this situation is only briefly addressed in a few slides, and I found his explanation deeply confusing. By adding the directory name of a subsumed dependency to the top-level Makefile.am's SUBDIRS, the dependency is configured and built. It is possible to manually set include paths through CFLAGS, but how do I link against libtool .la files?
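    A hedged sketch of the usual wiring, assuming the dependency is bundled in libfoo/ and itself produces libfoo.la with libtool (all names invented):

        # configure.ac - let the subsumed package run its own configure
        AC_CONFIG_SUBDIRS([libfoo])

        # Makefile.am (top level) - build the dependency before your own code
        SUBDIRS = libfoo src

        # src/Makefile.am - pick up its headers and link the .la file directly
        bin_PROGRAMS = myprog
        myprog_SOURCES = main.c
        myprog_CPPFLAGS = -I$(top_srcdir)/libfoo/include
        myprog_LDADD = $(top_builddir)/libfoo/libfoo.la

    Linking the .la file, rather than using -L/-l, lets libtool work out the transitive dependencies and the static/shared choice for you.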

    Read the article

  • GoogleAppEngine : JAR for enhancer not found, yet ASM is on the classpath

    - by James.Elsey
    When deploying my application to Google App Engine, I'm getting the following message after the upload:

        Exception in thread "Thread-0" You have selected to use ClassEnhancer "ASM" yet the JAR for that enhancer does not seem to be in the CLASSPATH!
        org.datanucleus.enhancer.NucleusEnhanceException: You have selected to use ClassEnhancer "ASM" yet the JAR for that enhancer does not seem to be in the CLASSPATH!
            at org.datanucleus.enhancer.DataNucleusEnhancer.init(DataNucleusEnhancer.java:212)
            at org.datanucleus.enhancer.DataNucleusEnhancer.addClasses(DataNucleusEnhancer.java:370)
            at org.datanucleus.enhancer.EnhancerProcessor$EnhanceRunnable.run(EnhancerProcessor.java:163)
            at java.lang.Thread.run(Thread.java:636)

    I'm not sure why this is happening, since I have the following in my POM:

        <dependency>
            <groupId>asm</groupId>
            <artifactId>asm</artifactId>
            <version>3.2</version>
        </dependency>

    which is required by GAE, and the ASM jar is located in the target directory, so I'm failing to see what the issue is. Any ideas?

        [james@nevada gae-deployment]$ ls target/salestracker/WEB-INF/lib/asm*
        target/salestracker/WEB-INF/lib/asm-3.2.jar
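    One hedged thing to check: a <dependency> in the project POM only puts ASM on the application's classpath. If the enhancement step runs through the DataNucleus Maven plugin, the jar must also be declared on the plugin's own classpath (plugin version omitted - use the one already in your build):

        <plugin>
          <groupId>org.datanucleus</groupId>
          <artifactId>maven-datanucleus-plugin</artifactId>
          <dependencies>
            <!-- visible to the enhancer itself, not just the webapp -->
            <dependency>
              <groupId>asm</groupId>
              <artifactId>asm</artifactId>
              <version>3.2</version>
            </dependency>
          </dependencies>
        </plugin>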

    Read the article

  • startx doesn't run, gives an error: xauth unable to link authority files

    - by Sandeep
    I have installed Windows XP on VPC and have installed Cygwin-X on that virtual machine. When I run the startx command, I get the following error:

        xauth: creating new authority file /home/Administrator/.serverauth.1480
        xauth: unable to link authority file /home/Administrator/.serverauth.1480, use /home/Administrator/.serverauth.1480-n
        xauth: creating new authority file /home/Administrator/.Xauthority
        xauth: creating new authority file /home/Administrator/.Xauthority
        xauth: unable to link authority file /home/Administrator/.Xauthority, use /home/Administrator/.Xauthority-n
        xauth: creating new authority file /home/Administrator/.Xauthority
        xauth: creating new authority file /home/Administrator/.Xauthority
        xauth: unable to link authority file /home/Administrator/.Xauthority, use /home/Administrator/.Xauthority-n
        giving up.
        xinit: No such file or directory (errno 2): unable to connect to X server
        xinit: No such process (errno 3): Server error.
        xauth: creating new authority file /home/Administrator/.Xauthority

    Read the article

  • How to read a user-uploaded file without saving it to the database

    - by GoodGets
    I'd like to be able to read an XML file uploaded by the user (less than 100 KB), but not have to first save that file to the database. I don't need the file past the current action (its contents get parsed and added to the database; however, parsing the file is not the problem). Since local files can be read with:

        File.read("export.opml")

    I thought about just creating a file_field for :uploaded_file, then trying to read it with:

        File.read(params[:uploaded_file])

    but all that does is throw a TypeError (can't convert HashWithIndifferentAccess into String). I really have tried a lot of various things (including reading from the /tmp directory as well), but could get none of them to work. I hope the brevity of my question doesn't mask the effort I've put into trying to solve this on my own, but I didn't want to pollute this question with a hundred ways of how NOT to get it done. Big thanks to anyone who chimes in.
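    A hedged sketch of what usually works here: params[:uploaded_file] is not a path string but an upload object (a Tempfile or StringIO under the hood) that already responds to read, so no File.read is needed:

        # hedged controller sketch - the upload never touches the database,
        # and for small files may never even touch disk
        def import
          upload = params[:uploaded_file]   # an uploaded-file object, not a String
          xml    = upload.read              # the raw file contents (< 100 KB here)
          parse_opml(xml)                   # assumption: your existing parsing code
          redirect_to root_path
        end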

    Read the article

  • Setting library include paths in C++

    - by Drew
    Hi all, I just installed gd2 using MacPorts (sudo port install gd2), which installed libraries in the following places:

        /opt/local/include/gd.h
        /opt/local/lib/libgd.dylib (link)
        /opt/local/lib/libgd.la
        /opt/local/lib/libgd.a

    So when I create my C++ app I add #include "gd.h", which throws:

        main.cpp:4:16: error: gd.h: No such file or directory

    If I set gd.h as an absolute path (as above; not a solution, but I was curious), I get:

        g++ -L/opt/local/include -L/opt/local/lib main.o Heatmap_Map.o Heatmap_Point.o -o heatmap
        Undefined symbols:
          "_gdImagePng", referenced from: _main in main.o
          "_gdImageLine", referenced from: _main in main.o
          "_gdImageColorAllocate", referenced from: _main in main.o, _main in main.o
          "_gdImageDestroy", referenced from: _main in main.o
          "_gdImageCreate", referenced from: _main in main.o
          "_gdImageJpeg", referenced from: _main in main.o
        ld: symbol(s) not found

    So, I understand this means that ld cannot find the libraries it needs (hence trying to give it hints with the "-L" values). After giving g++ the -L hints and the absolute path in #include, I can get it to work, but I don't think I should have to do this. How can I make g++/ld search in the right places for the libraries? Drew J. Sonne.
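    For reference, the two flag families do different jobs in different steps: -I is the header search path for the compile step, while -L and -l are the library search path and the library to link in the link step. A hedged sketch using the paths above:

        # compile: -I satisfies #include "gd.h"
        g++ -Wall -I/opt/local/include -c main.cpp Heatmap_Map.cpp Heatmap_Point.cpp

        # link: -L locates libgd, -lgd links it
        g++ main.o Heatmap_Map.o Heatmap_Point.o -L/opt/local/lib -lgd -o heatmap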

    Read the article

  • Dell Studio 1737 Overheating

    - by Sean
    I am using a Dell Studio 1737 laptop. I have been running Linux, and also ran Windows recently for a very long time. I upgraded to the 10.10 distribution, and since that distro it seems that for some reason all Linuxes want to push my laptop to extremes. I have recently upgraded to Ubuntu 12.04, since I heard that it contains kernel fixes for overheating issues. 12.04 will actually eventually cool the system, but only after the fans run to the point that it sounds like a jet aircraft taking off and the laptop makes my hands sweat. In trying to combat the heat problems I have done the following:

    I installed the proprietary driver for my ATI Mobility HD 3600. I have tried both the one in Additional Drivers and ATI's latest greatest version. If I don't install this, my laptop will overheat and shut off in minutes. Both seem to perform similarly, but the heat problem remains.

    I have tried limiting the CPU by installing the CPUFreq indicator. This does help keep the machine from shutting off, but the heat still makes it uncomfortable to be around the machine. I usually run in power saver mode or run the CPU at 1.6 GHz just to err on the side of safety.

    I ran sensors-detect and here are the results:

        sean@sean-Studio-1737:~$ sudo sensors-detect
        # sensors-detect revision 5984 (2011-07-10 21:22:53 +0200)
        # System: Dell Inc. Studio 1737 (laptop)
        # Board: Dell Inc. 0F237N

        This program will help you determine which kernel modules you need
        to load to use lm_sensors most effectively. It is generally safe
        and recommended to accept the default answers to all questions,
        unless you know what you're doing.

        Some south bridges, CPUs or memory controllers contain embedded sensors.
        Do you want to scan for them? This is totally safe. (YES/no): y
        Module cpuid loaded successfully.
        Silicon Integrated Systems SIS5595...                       No
        VIA VT82C686 Integrated Sensors...                          No
        VIA VT8231 Integrated Sensors...                            No
        AMD K8 thermal sensors...                                   No
        AMD Family 10h thermal sensors...                           No
        AMD Family 11h thermal sensors...                           No
        AMD Family 12h and 14h thermal sensors...                   No
        AMD Family 15h thermal sensors...                           No
        AMD Family 15h power sensors...                             No
        Intel digital thermal sensor...                             Success!
            (driver `coretemp')
        Intel AMB FB-DIMM thermal sensor...                         No
        VIA C7 thermal sensor...                                    No
        VIA Nano thermal sensor...                                  No

        Some Super I/O chips contain embedded sensors. We have to write to
        standard I/O ports to probe them. This is usually safe.
        Do you want to scan for Super I/O sensors? (YES/no): y
        Probing for Super-I/O at 0x2e/0x2f
        Trying family `National Semiconductor/ITE'...               No
        Trying family `SMSC'...                                     No
        Trying family `VIA/Winbond/Nuvoton/Fintek'...               No
        Trying family `ITE'...                                      No
        Probing for Super-I/O at 0x4e/0x4f
        Trying family `National Semiconductor/ITE'...               Yes
        Found `ITE IT8512E/F/G Super IO'
            (but not activated)

        Some hardware monitoring chips are accessible through the ISA I/O ports.
        We have to write to arbitrary I/O ports to probe them. This is usually
        safe though. Yes, you do have ISA I/O ports even if you do not have any
        ISA slots! Do you want to scan the ISA I/O ports? (YES/no): y
        Probing for `National Semiconductor LM78' at 0x290...       No
        Probing for `National Semiconductor LM79' at 0x290...       No
        Probing for `Winbond W83781D' at 0x290...                   No
        Probing for `Winbond W83782D' at 0x290...                   No

        Lastly, we can probe the I2C/SMBus adapters for connected hardware
        monitoring devices. This is the most risky part, and while it works
        reasonably well on most systems, it has been reported to cause trouble
        on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no): y
        Using driver `i2c-i801' for device 0000:00:1f.3: Intel ICH9
        Module i2c-i801 loaded successfully.
        Module i2c-dev loaded successfully.

        Now follows a summary of the probes I have just done.
        Just press ENTER to continue:

        Driver `coretemp':
          * Chip `Intel digital thermal sensor' (confidence: 9)

        To load everything that is needed, add this to /etc/modules:
        #----cut here----
        # Chip drivers
        coretemp
        #----cut here----
        If you have some drivers built into your kernel, the list above will
        contain too many modules. Skip the appropriate ones!

        Do you want to add these lines automatically to /etc/modules? (yes/NO)y
        Successful!

        Monitoring programs won't work until the needed modules are loaded.
        You may want to run 'service module-init-tools start' to load them.

        Unloading i2c-dev... OK
        Unloading i2c-i801... OK
        Unloading cpuid... OK

        sean@sean-Studio-1737:~$ sudo service module-init-tools start
        module-init-tools stop/waiting

    I also tried installing i8k, but that didn't work since it didn't seem to be able to communicate with the hardware (probably intended for a different kind of device). Also, I ran acpi -V and here are the results:

        Battery 0: Full, 100%
        Battery 0: design capacity 613 mAh, last full capacity 260 mAh = 42%
        Adapter 0: on-line
        Thermal 0: ok, 49.0 degrees C
        Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 1: ok, 48.0 degrees C
        Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 2: ok, 51.0 degrees C
        Thermal 2: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Cooling 0: LCD 0 of 15
        Cooling 1: Processor 0 of 10
        Cooling 2: Processor 0 of 10

    I have hit a wall and don't know what to do now. Any advice is appreciated.

    Read the article

  • How to find the physical path of a GSP file in a deployed grails application

    - by Deepak Mittal
    I need to find out the physical path of a Grails GSP file. My requirement is that I want to create a new layout file at run-time and use that in the application. I have been able to achieve this without problems when the application runs on Jetty (grails run-app); however, when I deploy the app on JBoss, the path at which the file needs to be created changes. So, ideally, I would like to find out at runtime, using some magical utility, the path of a particular GSP (let's say the main.gsp layout file), and I need to create my new layout in the same directory in which main.gsp resides. Any pointers? -Deepak
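    A hedged sketch of one way to resolve the path at runtime: in a deployed WAR the views sit under /WEB-INF/grails-app/views, and the servlet API can map that to a physical location (getRealPath returns null when the container does not explode the WAR, so check for it):

        // from a controller, where servletContext is available;
        // the layout file name is hypothetical
        def layoutsDir = servletContext.getRealPath('/WEB-INF/grails-app/views/layouts')
        if (layoutsDir) {
            def newLayout = new File(layoutsDir, 'generated.gsp')
            newLayout.text = '<html><head><g:layoutHead/></head><body><g:layoutBody/></body></html>'
        }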

    Read the article
