Search Results

Search found 59256 results on 2371 pages for 'data compression'.

Page 426 of 2371

  • Please help with 'System.Data.DataRowView' does not contain a property with the name...

    - by Catalin
    My application works just fine until, under some conditions I have not been able to identify, it starts throwing these errors. Once they start, they appear all over the application, whether the code was built using ObjectDataSource or SqlDataSource. Apparently, pages that use "classic" code-behind data binding do not throw the errors (nothing shows up in the application event log); the page is displayed but it does not contain any data. Please help me track down these very strange errors!

    Here is the error log from the Event Viewer:

        Exception information:
            Exception type: HttpUnhandledException
            Exception message: Exception of type 'System.Web.HttpUnhandledException' was thrown.
        Request information:
            Request URL: http://SITE/agsis/Default.aspx?tabid=1281&error=DataBinding5/7/2009 10:23:03 AMa+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'.
            Request path: /agsis/Default.aspx
            User host address: 299.299.299.299 ;)
            User: georgeta
            Is authenticated: True
            Authentication Type: Forms
            Thread account name: NT AUTHORITY\NETWORK SERVICE
        Thread information:
            Thread ID: 1
            Thread account name: NT AUTHORITY\NETWORK SERVICE
            Is impersonating: False
        Stack trace:
            at System.Web.UI.Page.HandleError(Exception e)
            at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
            at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
            at System.Web.UI.Page.ProcessRequest()
            at System.Web.UI.Page.ProcessRequest(HttpContext context)
            at ASP.errorpage_aspx.ProcessRequest(HttpContext context)
            at System.Web.HttpServerUtility.ExecuteInternal(IHttpHandler handler, TextWriter writer, Boolean preserveForm, Boolean setPreviousPage, VirtualPath path, VirtualPath filePath, String physPath, Exception error, String queryStringOverride)

    Here is the error log from my application:

        AssemblyVersion: 04.05.01
        PortalID: 18
        PortalName: Stiinte Economice
        UserID: 85
        UserName: georgeta
        ActiveTabID: 1281
        ActiveTabName: Plan Invatamant
        RawURL: /agsis/Default.aspx?tabid=1281&error=DataBinding%3a+'System.Data.DataRowView'+does+not+contain+a+property+with+the+name+'DenumireMaterie'.
        AbsoluteURL: /agsis/Default.aspx
        AbsoluteURLReferrer: http://SITE/agsis/Default.aspx?tabid=1281&ctl=Login&returnurl=%2fagsis%2fDefault.aspx%3ftabid%3d1281%26ctl%3dNotePlanSemestru%26mid%3d2801%26ID_PlanSemestru%3d518%26ID_PlanInvatamant%3d304%26ID_FC%3d206%26ID_FCForma%3d85%26ID_Domeniu%3d418%26ID_AnStudiu%3d7%26ID_Specializare%3d6522%26ID_AnUniv%3d27
        UserAgent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; SIMBAR={8C326639-C677-4045-B6D9-F59C330790E7})
        DefaultDataProvider: DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider
        ExceptionGUID: c56dc33e-973f-467a-a90d-2a8fc7193aec
        InnerException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'.
        FileName:
        FileLineNumber: 0
        FileColumnNumber: 0
        Method: System.Web.UI.DataBinder.GetPropertyValue
        StackTrace:
        Message: DotNetNuke.Services.Exceptions.PageLoadException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'. ---> System.Web.HttpException: DataBinding: 'System.Data.DataRowView' does not contain a property with the name 'ID_PlanInvatamant'.
            at System.Web.UI.DataBinder.GetPropertyValue(Object container, String propName)
            at System.Web.UI.WebControls.GridView.CreateChildControls(IEnumerable dataSource, Boolean dataBinding)
            at System.Web.UI.WebControls.CompositeDataBoundControl.PerformDataBinding(IEnumerable data)
            at System.Web.UI.WebControls.GridView.PerformDataBinding(IEnumerable data)
            at System.Web.UI.WebControls.DataBoundControl.OnDataSourceViewSelectCallback(IEnumerable data)
            at System.Web.UI.WebControls.DataBoundControl.PerformSelect()
            at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound()
            at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls()
            at System.Web.UI.Control.EnsureChildControls()
            at System.Web.UI.Control.PreRenderRecursiveInternal() (this frame repeats 14 times)
            at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
            --- End of inner exception stack trace ---
        Source:
        Server Name: SERVER2

    Thank you, Catalin

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and general best practice. In short, the body of my webpage will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name        Age   Position   Date Joined
        Jon Smith   23    Striker    18th Mar 2005
        John Doe    38    Defender   3rd Jan 1988

    In terms of functionality, primarily I'd like to give the user the ability to edit the data and, after the edit, commit it to the database and refresh the view. The reason I want to refresh the view is that the data is date ordered, so I will need to re-sort if the user edits a date field.

    My main question is: what architecture / tools would be best suited to fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were: ADO.NET for data retrieval (something I have used before and feel comfortable with - I like the look of LINQ to SQL but don't want to make the learning curve any steeper for my first outing into MVC land just yet); partial views to create a template and then iterate through a datatable that I have pulled back from my database model; and jQuery to allow the user to edit data in the table, error-check edited data entries, and so on.

    Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date sorted). Any thoughts on this?

    Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thought is that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, the ability to sort by column using a sort glyph in the column headers, etc.) and I don't really see any benefit over and above the solution I have outlined above. Again, is there reason to reconsider this view?

    Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction, and this seemed like it may be a better option than using partial views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction, or have I misunderstood its usage?

    Hope this post is clear and not too long. I did consider separate posts for each topic (e.g. partial view vs. Html.RenderAction, when to use a jQuery datagrid plug-in) but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks
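    For illustration, here is a minimal C# sketch of the Html.RenderAction route discussed above, assuming ASP.NET MVC 2 and purely invented names (PlayersController, PlayerRow, "_PlayerTable") that are not part of the original question. The action re-queries and re-sorts the rows, so the parent view renders it with <% Html.RenderAction("Table", "Players"); %> and can request it again after an edit:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        public class PlayerRow
        {
            public string Name { get; set; }
            public int Age { get; set; }
            public string Position { get; set; }
            public DateTime DateJoined { get; set; }
        }

        public class PlayersController : Controller
        {
            // Returns only the table fragment; the sort-by-date rule lives here,
            // so every re-render after an edit comes back in the right order.
            public ActionResult Table()
            {
                IEnumerable<PlayerRow> rows = LoadRows()
                    .OrderBy(r => r.DateJoined);
                return PartialView("_PlayerTable", rows);
            }

            private IEnumerable<PlayerRow> LoadRows()
            {
                // Placeholder for the ADO.NET query mentioned in the post.
                return new List<PlayerRow>();
            }
        }

    A plain partial view works the same way; RenderAction mainly buys a place to keep the re-query/re-sort logic out of the parent controller action.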

    Read the article

  • Smile releases its 2013 open-source "Michelin Guide": the free white paper gains new sections on Cloud and Big Data

    Smile releases its 2013 open-source "Michelin Guide"; the free white paper gains new sections on Cloud and Big Data. For the 2013 edition of its open-source reference guide, Smile has enriched its white paper (285 pages) with around thirty new solutions and two new sections (Cloud and Big Data). More than 300 solutions are covered (200 of them evaluated in detail) across more than 40 application domains, organized into three "dimensions" (Infrastructure, Development and middleware layers, Applications). The book takes the form of presentation sheets for each product (product version, website, ...

    Read the article

  • Ensure your view and function metadata is up to date.

    - by simonsabin
    You will see, if you use views and functions, that SQL Server holds their rowset metadata in system tables. This means that if you change the underlying tables, columns and data types, your views and functions can be out of sync. This is especially the case with views and functions that use select *. To get the metadata updated you need to use sp_refreshsqlmodule. This forces the object to be "re-run" into the database and the metadata updated. Thomas mentioned sp_refreshview which is a...(read more)
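    The refresh itself is a single call per module (for example, EXEC sys.sp_refreshsqlmodule N'dbo.MyView'). As a hedged illustration only, here is a small C# sketch that refreshes every user view and function after a schema change; the connection string is a placeholder, and schema-bound modules will raise an error and would need to be skipped:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        class RefreshAllModules
        {
            static void Main()
            {
                const string connStr = "Server=.;Database=MyDb;Integrated Security=true"; // placeholder
                var modules = new List<string>();

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // Collect every user-defined view and function (scalar, inline, table-valued).
                    using (var cmd = new SqlCommand(
                        "SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name) " +
                        "FROM sys.objects WHERE type IN ('V','FN','IF','TF') AND is_ms_shipped = 0", conn))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            modules.Add(reader.GetString(0));
                    }

                    // Re-check each module against the current table metadata.
                    foreach (var module in modules)
                    {
                        using (var cmd = new SqlCommand("sys.sp_refreshsqlmodule", conn))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.Parameters.AddWithValue("@name", module);
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }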

    Read the article

  • Can I change a linux swap partition to a file system without losing data?

    - by Diogo
    So I am using Ubuntu 11.10 and have just switched from Windows to Ubuntu. I accidentally set up one of my file systems as a Linux swap partition, and now I want to access my files as I did in Windows, but I do not know any safe way to change that partition back to a file system without formatting it and losing the data, and I do not want to lose the data I have there. Does deleting the partition and mounting it again solve the problem?

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me I won't go."

    Is it too much to ask of a quite expensive 3rd party backup tool that it be way faster than the SQL Server native backup? Or at least that it save a respectable amount of storage by producing really smaller backup files? By saying "really smaller", I mean at least getting a file half the size.

    After Googling the internet in an attempt to understand what other "SQL people" are using for database backups, I see that most people are using one of three tools which are the main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I have wondered whether many are simply accustomed to using the above tools since SQL 2000 and 2005. This can easily be understood: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file and leaving you speechless; you run to management persuading them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore - features that are still missing from SQL Server. But in case you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database. Take a look at the table below and check out how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, when talking about backup speed, SQL 2008 R2 compresses and performs the backup in similar overall times as all three other tools; the 3rd party tools' maximum compression levels take twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay there is very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA 11 disks and FC 53 disks, and, obviously, on the faster storage the backup was ready in half the time. Looking at the above results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to do so.

    Very large database. As the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped. SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU - in the case above, 8 files for a 2 quad-core CPU server. The impact of additional files is minimal. However, SQL Safe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and the high CPU are variables that should be taken into consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line: 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 discs.

    Backup statements: No matter how good the UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used extended stored procedures (command line execution is also an option; I haven't noticed any impact on backup performance).

    SQL backup:

        backup database <DBNAME>
        to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak'
        with format, compression

    LiteSpeed:

        EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname = N'<DBName> full backup',
            @desc = N'Test', @compressionlevel = 8,
            @filename = N'\\<networkpath>\par1.bak', @filename = N'\\<networkpath>\par2.bak', @filename = N'\\<networkpath>\par3.bak',
            @init = 1

    SQL Backup (Red Gate):

        EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME>
            TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb''
            WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL Safe:

        EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak',
            @backuptype = 'Full', @compressionlevel = 4,
            @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using a 3rd party tool for the backups in your production environment with a maximum compression level, you will definitely need to consider limiting its CPU usage, which will increase the backup operation time even more:

        Red Gate: use the THREADPRIORITY option (values 0 - 6)
        LiteSpeed: use @throttle (a percentage, like 70%)
        SQL Safe: the only thing I have found was the @Threads option.

    Yours, Maria

    Read the article

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

        Machine: ACT-Server
        Domain: mydomain.example.com
        OS: Windows 7 Enterprise 64-bit Edition
        Windows Firewall configuration: File and Printer Sharing (SMB-In) is enabled for Public, Domain, and Private networks
        ACT Log Share: ACT

    Share permissions*:

        Group/user names    Allow permissions
        ---------------------------------------
        Everyone            Full Control
        Administrator       Full Control
        Domain Admins       Full Control
        Administrators      Full Control
        ANONYMOUS LOGON     Full Control

    Folder permissions*:

        Group/user name     Allow permissions          Apply to
        -------------------------------------------------------
        ANONYMOUS LOGON     Read, write & execute      This folder, subfolders, and files
        Domain Admins       Full control               This folder, subfolders, and files
        Everyone            Read, write & execute      This folder, subfolders, and files
        Administrators      Full control               This folder, subfolders, and files
        CREATOR OWNER       Full control               Subfolders and files
        SYSTEM              Full control               This folder, subfolders, and files
        INTERACTIVE         Traverse folder / execute file, List folder / read data, Read attributes, Read extended attributes, Create files / write data, Create folders / append data, Write attributes, Write extended attributes, Delete subfolders and files, Delete, Read permissions    This folder, subfolders, and files
        SERVICE             (same as INTERACTIVE)
        BATCH               (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file sharing permission issue or a network access policy issue, but of course, I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding from reading numerous pieces of documentation that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials. The goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the \\ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate exceptions are in place.

    Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources - including Siebel, Oracle, PeopleSoft, SAP, and others - into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of the Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports / dashboards / a model that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk and improve success.
    - Build vs. buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems.
    - Data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP.
    - Present OLTP data and highlight issues of poor or missing data - and improve data quality and accuracy.

    Prebuilt Integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP.
    - Co-developed with inputs from functional experts in the BI and Applications teams.
    - Out-of-the-box dimensional model to source model mappings.
    - Multi source and multi instance support.

    Rich Data Model
    BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities.
    - Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360.
    - Over 360 fact tables across 7 product areas: CRM - 145, SCM - 47, Financials - 28, Procurement - 20, HCM - 27, Projects - 18, Campus Solutions - 21, PLM - 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise and ledger based.
    - Conformed data model and metrics for real time vs. warehouse based reporting.
    - Multi-tenant enabled.

    Extensive BI related transformations
    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules.
    - Slowly Changing Dimension support.
    - Hierarchy flattening support.
    - Row / column hybrid hierarchy flattening.
    - As Is vs. As Was hierarchy support.
    - Currency conversion: support for 3 corporate, CRM, ledger and transaction currencies.
    - UOM conversion.
    - Internationalization / localization.
    - Dynamic data translations.
    - Code standardization (Domains).
    - Historical snapshots.
    - Cycle and process lifecycle computations.
    - Balance facts.
    - Equalization of GL accounting chartfields/segments.
    - Standardized values for categorizing GL accounts.
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL.
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals.
    - Complex event interpretation of source data, e.g.:
      - what constitutes a transfer;
      - deriving supervisors via the position hierarchy;
      - deriving the primary assignment in PSFT;
      - categorizing and transposing Payroll Balances to specific metrics to support side by side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments;
      - counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside the total transfers and terminations.
    - Multi pass processing of multiple sources, e.g. headcount, salary, promotion, performance, to allow side by side comparison.
    - Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery.
    - Calculation of complex measures, for example:
      - COGS, DSO, DPO, inventory turns, etc.;
      - transfers within a hierarchy or out of / into a hierarchy relative to the view point in the hierarchy.

    Configurability and Extensibility support
    BI apps offer support for extensibility for various entities, either as automated extensibility or as part of the extension methodology.
    - Key Flexfield and Descriptive Flexfield support.
    - Extensible attribute support (JDE).
    - Conformed Domains.

    ETL Architecture
    BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model.
    - Multi source, multi technology.
    - Orchestration: creates a load plan taking into account task dependencies and the customer's deployment, generating a plan from multiple complex ETL tasks.
    - Plan optimization allowing parallel ETL tasks.
    - Oracle: bitmap indexes and partition management.
    - High availability support.
    - Follow-the-sun support.

    TCO
    BI apps supply several utilities / capabilities that help with the overall total cost of ownership and ensure a rapid implementation.
    - Improved cost of ownership - lower cost to deploy.
    - On-going support for new versions of the source application.
    - Task based setup flows.
    - Data lineage.
    - Functional setup performed in a Web UI by a functional person.
    - Configuration Test to Production support.

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs.
    - Fine grain object security at report / dashboard and presentation catalog level.
    - Data security integration with source systems.
    - Extensible to support external data security rules.

    Extensive Set of KPIs
    - Over 7000 base and derived metrics across all modules.
    - Time series calculations (YoY, % growth, etc).
    - Common currency and UOM reporting.
    - Cross subject area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc).

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries.
    - Hundreds of role based dashboards.
    - Dynamic currency conversion at dashboard level.

    Highly tuned Performance
    The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance.
    - Optimized data model for BI and analytic queries.
    - Prebuilt aggregates, and the ability for customers to easily create their own aggregates on warehouse facts, allow for scalable end user performance.
    - Incremental extracts and loads.
    - Incremental aggregate build.
    - Automatic table index and statistics management.
    - Parallel ETL loads.
    - Source system deletes handling.
    - Low latency extract with GoldenGate.
    - Micro ETL support.
    - Bitmap indexes.
    - Partitioning support.
    - Modularized deployment: start small and add other subject areas seamlessly.

    Source Specific Staging and Real Time Schema
    - Support for a source specific operational reporting schema for EBS, PSFT, Siebel and JDE.

    Application Integrations
    The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making.
    - Embedded dashboards for Fusion, EBS and Siebel applications.
    - Action Link support.
    - Marketing Segmentation.
    - Sales Predictor Dashboard.
    - Territory Management.

    External Integrations
    The BI apps data integration choices include support for loading external data.
    - External data enrichment choices: UNSPSC, Item class, etc.
    - Extensible Spend Classification.

    Broad Deployment Choices
    - Exalytics support.
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL.
    - ETL tool of choice: ODI (coming), Informatica.

    Extensible and Customizable
    - Extensible architecture and methodology to add custom and external content.
    - Upgradable across releases.

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • What could be the best way to generalize data from Facebook and Twitter?

    - by Sjaak van der Heide
    I am not sure if this is the best subsite to ask this question, but I'm pretty sure it doesn't fit on the normal or Facebook SO page... I've been asked to make a general API for connecting to several social media platforms (at the moment Facebook and Twitter). I have already implemented both of them separately, meaning I retrieve the data I need from both Facebook and Twitter and hold it in its own data class; in my case a list of FacebookTimelineItems and a list of TwitterTimelineItems. Now the hard part is taking the parts that are used in both (username, id, message and such) and making one general class that is eventually passed on to whoever/whatever sent the call to my API. These are two pics of the data classes I have:

        http://imageshack.us/photo/my-images/703/facebookdata.png/
        http://imageshack.us/photo/my-images/204/twitterdata.png/

    They are probably not 100% correct, but they give an idea of what it looks like. Now I've been having several ideas about how to go about generalizing the two, which is harder than I thought at first:

    - Create an interface (TimelineItem) and let the other classes extend that one. This way I'll always be sure I have a class that contains at least the basic info I need. The downside is that deserializing the JSON seems to be a nightmare.
    - Use the two data classes I have and combine them into a new class afterwards, then pass that one back to whoever requested it. This would probably work, but I get the idea that it's not the best way to tackle this problem, and it is pretty dodgy IF I get it working.
    - Or, in case the other two are nearly impossible: keep the two separated in the front end, and go sit in the corner crying because I've just figured out you can't lump together Facebook and Twitter...

    Note: I don't have to make the front-end part (view), I just make sure the Model is nicely filled with data :) I hope I placed this in the right section; if I didn't, I apologise and would like to know where I should go with my question. Thanks in advance for any replies/ideas/opinions on this.
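    For illustration, a minimal sketch of the first option (shared interface plus thin adapters), written here in C# with invented member names - the real Facebook/Twitter classes and their JSON mapping stay exactly as they are:

        using System;

        // Shared shape exposed by the general API, whatever platform the item came from.
        public interface ITimelineItem
        {
            string Id { get; }
            string UserName { get; }
            string Message { get; }
            DateTime CreatedAt { get; }
        }

        // Stand-in for the existing platform-specific data class; it keeps its own
        // JSON-friendly fields and is deserialized exactly as before.
        public class FacebookTimelineItem
        {
            public string PostId { get; set; }
            public string FromName { get; set; }
            public string StoryText { get; set; }
            public DateTime CreatedTime { get; set; }
        }

        // Thin adapter: deserialize into the platform class first, then wrap it,
        // which avoids trying to deserialize JSON straight into the interface.
        public class FacebookTimelineAdapter : ITimelineItem
        {
            private readonly FacebookTimelineItem _item;
            public FacebookTimelineAdapter(FacebookTimelineItem item) { _item = item; }

            public string Id { get { return _item.PostId; } }
            public string UserName { get { return _item.FromName; } }
            public string Message { get { return _item.StoryText; } }
            public DateTime CreatedAt { get { return _item.CreatedTime; } }
        }

    A TwitterTimelineAdapter would follow the same pattern, and the API can then return a single List<ITimelineItem>.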

    Read the article

  • How to save and retrieve data as key-value pairs or files in isolated storage?

    - by kaleidoscope
    One can use isolated storage to store data locally on the user's computer. There are two ways to use isolated storage. The first way is to save or retrieve data as key/value pairs by using the IsolatedStorageSettings class. The second way is to save or retrieve files by using the IsolatedStorageFile class. More details can be found at http://silverlight.net/learn/quickstarts/isolatedstorage/   Rituraj, J
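    As a short, hedged illustration of both approaches in Silverlight C# (the key name, value and file name below are made up):

        using System.IO;
        using System.IO.IsolatedStorage;

        public static class IsoStoreDemo
        {
            // 1) Key/value pairs via IsolatedStorageSettings
            public static void SaveSetting()
            {
                IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
                settings["lastUser"] = "some value";   // add or overwrite the entry
                settings.Save();                       // persist to isolated storage
            }

            // 2) Whole files via IsolatedStorageFile
            public static void SaveFile()
            {
                using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
                using (IsolatedStorageFileStream stream = store.CreateFile("notes.txt"))
                using (StreamWriter writer = new StreamWriter(stream))
                {
                    writer.WriteLine("saved in isolated storage");
                }
            }
        }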

    Read the article

  • When does Information become Data? (i.e. Information wants to be free) [closed]

    - by James P. Wright
    I hear programmers often talk about how Information Wants To Be Free, which I mostly agree with, but the thing people don't often pay attention to is that information and data are not the same thing. Should data also be free? Does that mean all of you should have full access to my Social Security Number and other personal "information"? Where is the limit? And if there is a limit, why do people throw this phrase around as if it fits every circumstance (like this one)?

    Read the article

  • Google, or the global data warehouse: knowledge sharing or ownership of the global market?

    Google, or the global data warehouse: knowledge sharing or ownership of the global market? Google has announced the addition of an extra dataset, named World Bank, to its Public Data Explorer tool. Through this tool you can find worldwide information on agriculture, electricity consumption per capita, CO2 emissions per capita, the number of internet users, and more. Various charts then give you statistics on all the capitals, letting you compare the different information specific to ...

    Read the article

  • Recovering data from failed Raid configuration with 4 drives and two raid sets (Asus P6T / Intel ICH10r)

    - by user56365
    I've added the complete, detailed version of my question below for those who can help, but I want to quickly summarize it first. I set up two RAID arrays using four WD Raptors: a striped set for the OS and a 1+0 set for crucial data. After one boot (out of the 50 or so) during which a cable had fallen out, the drive wasn't recognized in the array anymore. After trying to fix it, another drive did the same. I now have two drives remaining, luckily with the parity information. I know the striped set is gone, but I need the data on the other set. Can anyone recommend anything to recover the data, or to fix the two drives so the RAID controller recognizes them again? They are still listed on the utility screen as part of the configuration, but reported as not found.

    More details: I recently upgraded to an ASUS P6T motherboard with an Intel ICH10R RAID controller and changed my previous 4-drive RAID array from strictly a RAID 1+0 set to a RAID 0 set for the OS/page/scratch drive and a RAID 1+0 set for crucial data. I never had problems with this configuration after upgrading, even when a drive died and was replaced; I managed to rebuild the array fine. Unfortunately, this time around a cable came unattached and I booted my system up to the RAID status screen showing the degraded error. This shouldn't have been a problem, but after I reattached the drive it was no longer recognized as a member of the array. Both drives actually show up as non-member disks. I've spent a very, very long time online trying to find information or support and haven't had much luck. After spending time trying to scan the drive for errors, damaged partition info, etc., another drive in the set decided it didn't want to be recognized as a part of the array. At this point, I have two of the four drives still functioning, but the RAID 1+0 array went from degraded to failed and I must find a way to retrieve that data. I think the two drives still in the array have the parity information, because they show up as OS (110GB), BACKUP (80GB) and OS:1 (110GB), BACKUP (80GB) under Windows disk management. The other two are simply 74GB of raw, unallocated space.

    Is it possible to recover the data using those two drives only, and which tool would I use? Could it be a simple partition table error, or some other error that is repairable with the hard drive utilities out there? I know the RAID 0 set is done for, but since the right drives failed in the 1+0 config to preserve the data, I would assume I can retrieve it somehow.

    Read the article

  • How to install software that "Requires installation of untrusted packages"?

    - by user135682
    I am using Ubuntu 12.10. When I try to update the software, it shows me this error:

        Requires installation of untrusted packages
        This requires installing packages from unauthenticated sources.

        Details:
        kalzium-data kanagram kate-data kde-l10n-engb kde-l10n-zhcn kde-runtime-data kdegames-data
        klettres-data libakonadi-kabc4 liblxc0 libwildmidi-config libwildmidi1 lxc marble-data
        nepomuk-core-data parley-data tomboy

    Read the article

  • Facebook releases Presto, its open-source big data query engine, said to be ten times faster than Hadoop's

    Facebook releases Presto, its open-source query engine for big data, said to be ten times faster than Hadoop's. Many companies, like Facebook, depend on big data. In this field, the Hadoop/Hive pair is one of the reference solutions. As a reminder, Hive is the popular query engine for Hadoop. However, it may be that MapReduce, the essential component on which Hive relies, is not optimized for situations where the quantity of data exceeds a certain...

    Read the article

  • Windows Server 2003 - Are ODBC Data Sources set per-user?

    - by Jakobud
    When I'm logged into our Windows Server 2003 server, I don't see any ODBC Data Sources, but when a different user logs in (who doesn't have administrative rights), they have a big list of ODBC Data Sources. Are ODBC Data Sources set on a per-user basis? How come the Administrator can't see users' ODBC Data Sources?

    Read the article

  • The Bouches-du-Rhône sponsor developers to promote open data: have you submitted your application?

    The Bouches-du-Rhône are launching a sponsorship operation for developers of applications and websites in order to promote open data and the region; Developpez.com is a partner. Since April, the Bouches-du-Rhône Tourisme association has joined the open data movement by opening its portal and releasing around a hundred tourism-related datasets for the department. All the information held by the association has been made available and usable by developers, scientists, associations, students and businesses. In short, by everyone. There you can find, for example...

    Read the article

  • The Bouches-du-Rhône launch a developer sponsorship operation to promote open data and the region

    The Bouches-du-Rhône are launching a sponsorship operation for developers of applications and websites in order to promote open data and the region; Developpez.com is a partner. Since April, the Bouches-du-Rhône Tourisme association has joined the open data movement by opening its portal and releasing around a hundred tourism-related datasets for the department. All the information held by the association has been made available and usable by developers, scientists, associations, students and businesses. In short, by everyone. There you can find...

    Read the article

  • Should I be relying on WebTests for data validation?

    - by Alexander Kahoun
    I have a suite of web tests created for a web service. I use it for testing a particular input method that updates a SQL database. The web service doesn't have a way to retrieve the data; that's not its purpose, only to update it. I have a validator that validates the response XML that the web service generates for each request. All that works fine. A teammate suggested that I add data validation, so that after the initial response validator runs I check the database and compare what it holds with what was in the input request. We have a number of services and libraries, separate from the web service I'm testing, that I can use to get the data and compare it. The problem is that when I run the web test, the data validation always fails even when the request succeeds. I've tried putting the thread to sleep between the response validation and the data validation, but to no avail; it always gets the data from before the response validation. I can set a breakpoint and visually see that the data has been updated in the DB; the funny thing is that when I step through it in debug with the breakpoint, it does validate successfully. Before I get too much further into this issue I have to ask: is this the purpose of web tests? Should I be able to validate data through service calls in this manner, or am I asking too much of a web test and is the response validation as far as I should go?
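    If the data simply is not committed or visible yet at the moment the comparison runs, one framework-agnostic workaround is to poll with a deadline rather than a single fixed sleep. A hedged C# sketch (GetSavedValue stands in for the poster's own service/library call and is not a real API):

        using System;
        using System.Threading;

        public static class EventualAssert
        {
            // Re-evaluates `check` every `interval` until it returns true or `timeout` elapses.
            public static bool WaitUntil(Func<bool> check, TimeSpan timeout, TimeSpan interval)
            {
                DateTime deadline = DateTime.UtcNow + timeout;
                while (DateTime.UtcNow < deadline)
                {
                    if (check())
                        return true;       // the saved data finally matches the request
                    Thread.Sleep(interval);
                }
                return check();            // one last attempt at the deadline
            }
        }

        // Example usage inside the data validation step:
        //   bool ok = EventualAssert.WaitUntil(
        //       () => GetSavedValue(requestId) == expectedValue,
        //       TimeSpan.FromSeconds(10), TimeSpan.FromMilliseconds(250));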

    Read the article

  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open them with evince, I get this message: "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info:

    - the sqldump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4 was used
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

        #!/usr/bin/env python3.1
        import re

        # Strip the leading "INSERT INTO attachment VALUES('n', 'n', 'n', '" part of each row.
        trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
        # Strip the trailing "', 'n', 'n');" part.
        trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
        # Split what is left into the file name (ending in .jpg/.jpeg/.gif/.bmp) and the raw data.
        extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

        with open('attachments.sql', 'rb') as fh:
            for line in fh:
                data = trim_l.findall(line)[0]
                data = trim_r.findall(data)[0]
                data = extractor.findall(data)
                if data:
                    name, data = data[0]
                    try:
                        filename = 'files/%s' % str(name, 'UTF-8')
                        ah = open(filename, 'wb')
                        ah.write(data)
                    except UnicodeDecodeError:
                        continue
                    finally:
                        ah.close()
            fh.close()

    Update: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.

    Read the article

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

        LastName  FirstName  CourseName
        John      Day        Pricing
        John      Day        Marketing
        John      Day        Finance
        Lisa      Smith      Marketing
        Lisa      Smith      Finance
        etc...

    The data shows employees within a business and which courses they have shown a preference to attend. The number of courses per employee will vary (i.e. as above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as much as possible, and I want to transform the data using SQL so that it looks like the following:

        LastName  FirstName  Course1    Course2    Course3
        John      Day        Pricing    Marketing  Finance
        Lisa      Smith      Marketing  Finance

    Any thoughts on how this may be achieved? Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over using the typical MVC syntax:

        <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data I would end up with lots of conditionals in my view, which I would like to avoid. I have tried transforming the data using C# in my ViewModel but have found it tough going and feel that I could lighten the workload by leveraging SQL before I return the data. Thanks.
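    One common way to do this in SQL Server, shown here as a hedged sketch only (the table name EmployeeCourses, the connection string, and the cap of three course columns are assumptions): number each employee's courses with ROW_NUMBER() and fold them into columns with conditional aggregation, then pull the flat rows back with the ADO.NET the poster already plans to use:

        using System.Data;
        using System.Data.SqlClient;

        class CoursePivot
        {
            static DataTable LoadPivotedCourses(string connectionString)
            {
                // One output row per employee; the ORDER BY inside ROW_NUMBER decides
                // which course lands in Course1/Course2/Course3.
                const string sql = @"
                    SELECT  LastName,
                            FirstName,
                            MAX(CASE WHEN rn = 1 THEN CourseName END) AS Course1,
                            MAX(CASE WHEN rn = 2 THEN CourseName END) AS Course2,
                            MAX(CASE WHEN rn = 3 THEN CourseName END) AS Course3
                    FROM (
                        SELECT LastName, FirstName, CourseName,
                               ROW_NUMBER() OVER (PARTITION BY LastName, FirstName
                                                  ORDER BY CourseName) AS rn
                        FROM EmployeeCourses
                    ) AS numbered
                    GROUP BY LastName, FirstName;";

                var table = new DataTable();
                using (var conn = new SqlConnection(connectionString))
                using (var adapter = new SqlDataAdapter(sql, conn))
                {
                    adapter.Fill(table);   // one flat row per employee, ready to iterate in the view
                }
                return table;
            }
        }

    The GROUP BY also collapses the duplicated name rows, which covers the "remove duplicates" part of the question.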

    Read the article

  • C#/.Net Error: MySql.Data Object reference not set to an instance of an object.

    - by Simon
    Hello, I get this exception in my application running in VS 2008 Express on my Windows 7 64-bit machine. I am using Connector/Net 6.2.2.0:

        Message: Object reference not set to an instance of an object.
        Source: MySql.Data
        in MySql.Data.MySqlClient.NativeDriver.GetResult(Int32& affectedRow, Int32& insertedId)

        Stack trace:
        in MySql.Data.MySqlClient.Driver.GetResult(Int32 statementId, Int32& affectedRows, Int32& insertedId)
        in MySql.Data.MySqlClient.Driver.NextResult(Int32 statementId)
        in MySql.Data.MySqlClient.MySqlDataReader.NextResult()
        in MySql.Data.MySqlClient.MySqlDataReader.Close()
        in MySql.Data.MySqlClient.MySqlConnection.Close()
        in MySql.Data.MySqlClient.MySqlConnection.Dispose(Boolean disposing)
        in System.ComponentModel.Component.Finalize()

    There is no inner exception. The exception is unhandled and the debugger doesn't point at any code line; it just says "Object reference not set to an instance of an object. MySql.Data". This error is really hard to reproduce. On my Windows XP 32-bit machine everything is OK. Could it be a bug in 64-bit Windows 7? Thank you very much for your answers. Regards, Simon

    Read the article

  • PHP DOMDocument class unable to access DOM node

    - by turbod
    Hi. I can't parse this URL: http://foldmunka.net

        $ch = curl_init("http://foldmunka.net");
        //curl_setopt($ch, CURLOPT_NOBODY, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        //curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // not necessary unless the file redirects (like the PHP example we're using here)
        $data = curl_exec($ch);
        $info = curl_getinfo($ch);
        curl_close($ch);
        clearstatcache();

        if ($data === false) {
            echo 'cURL failed';
            exit;
        }

        $dom = new DOMDocument();
        $data = mb_convert_encoding($data, 'HTML-ENTITIES', "utf-8");
        $data = preg_replace('/<\!\-\-\[if(.*)\]>/', '', $data);
        $data = str_replace('<![endif]-->', '', $data);
        $data = str_replace('<!--', '', $data);
        $data = str_replace('-->', '', $data);
        $data = preg_replace('@<script[^>]*?>.*?</script>@si', '', $data);
        $data = preg_replace('@<style[^>]*?>.*?</style>@si', '', $data);
        $data = mb_convert_encoding($data, 'HTML-ENTITIES', "utf-8");
        @$dom->loadHTML($data);

        $els = $dom->getElementsByTagName('*');
        foreach ($els as $el) {
            print $el->nodeName." | ".$el->getAttribute('content')."<hr />";
            if ($el->getAttribute('title')) $el->nodeValue = $el->getAttribute('title')." ".$el->nodeValue;
            if ($el->getAttribute('alt')) $el->nodeValue = $el->getAttribute('alt')." ".$el->nodeValue;
            print $el->nodeName." | ".$el->nodeValue."<hr />";
        }

    I need the alt and title attributes and the plain text, but on this page I cannot access the nodes inside the body tag.

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The information parsed will be stored in different objects, which will then be added to a collection. Since the GUI won't require loading ALL of these objects at once and keeping them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, I need the best way to reload previously observed data: re-run the parser / repopulate the collection / repopulate the GUI? Or perhaps find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that allows me to keep the collection in memory. This is just general talking; the question of what collection to use is a separate and complex one. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
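    The poster's project is Java; purely as a hedged illustration of the load-on-demand idea (written in C# to match the other sketches in this listing), here is the "index the file once, then materialize only the requested page" pattern: record the byte offset of every line in a first pass, then seek and read just the lines the GUI asks for, with the parse-into-object step left as a placeholder:

        using System.Collections.Generic;
        using System.IO;

        public class LineIndexedFile
        {
            private readonly string _path;
            private readonly List<long> _offsets = new List<long>();

            public LineIndexedFile(string path)
            {
                _path = path;
                using (FileStream stream = File.OpenRead(path))
                {
                    long position = 0;
                    int b;
                    _offsets.Add(0);                       // first line starts at offset 0
                    while ((b = stream.ReadByte()) != -1)  // simple byte scan; fine for a sketch
                    {
                        position++;
                        if (b == '\n') _offsets.Add(position);
                    }
                    if (_offsets.Count > 1 && _offsets[_offsets.Count - 1] == position)
                        _offsets.RemoveAt(_offsets.Count - 1);   // file ended with a newline
                }
            }

            public int Count { get { return _offsets.Count; } }

            // Load just one page of records on demand; nothing else is kept in memory.
            public IList<string> GetPage(int startLine, int count)
            {
                var page = new List<string>();
                using (FileStream stream = File.OpenRead(_path))
                {
                    stream.Seek(_offsets[startLine], SeekOrigin.Begin);
                    using (var reader = new StreamReader(stream))
                    {
                        for (int i = 0; i < count && startLine + i < _offsets.Count; i++)
                        {
                            string line = reader.ReadLine();
                            if (line == null) break;
                            page.Add(line);                // parse into the record object here
                        }
                    }
                }
                return page;
            }
        }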

    Read the article
