
dbffile's Introduction

DBFFile

Summary

Read and write .dbf (dBase III and Visual FoxPro) files in Node.js:

  • Supported field types:
    • C (string)
    • N (numeric)
    • F (float)
    • Y (currency)
    • I (integer)
    • L (logical)
    • D (date)
    • T (datetime)
    • B (double)
    • M (memo) Note: memo support is experimental/partial, with the following limitations:
      • read-only (can't create/write DBF files with memo fields)
      • can only read dBase III (version 0x83), dBase IV (version 0x8b), and VFP9 (version 0x30)
  • 'Loose' read mode - tries to read any kind of .dbf file without complaining. Unsupported field types are simply skipped.
  • Can open an existing .dbf file
    • Can access all field descriptors
    • Can access total record count
    • Can access date of last update
    • Can read records using async iteration
    • Can read records in arbitrary-sized batches
    • Can include deleted records in results
    • Supports very large files
  • Can create a new .dbf file
    • Can use field descriptors from a user-specified object or from another instance
  • Can append records to an existing .dbf file
    • Supports very large files
  • Can specify character encodings either per-file or per-field.
    • the default encoding is 'ISO-8859-1' (also known as Latin-1)
    • example per-file encoding: DBFFile.open(<path>, {encoding: 'EUC-JP'})
    • example per-field encoding: DBFFile.open(<path>, {encoding: {default: 'latin1', FIELD_XYZ: 'EUC-JP'}})
    • supported encodings are those of iconv-lite: https://github.com/ashtuchkin/iconv-lite/wiki/Supported-Encodings
  • All operations are asynchronous and return a promise

Installation

npm install dbffile or yarn add dbffile

Example: read all records in a .dbf file using for-await-of

import {DBFFile} from 'dbffile';

async function iterativeRead() {
    let dbf = await DBFFile.open('<full path to .dbf file>');
    console.log(`DBF file contains ${dbf.recordCount} records.`);
    console.log(`Field names: ${dbf.fields.map(f => f.name).join(', ')}`);
    for await (const record of dbf) console.log(record);
}

Example: reading a batch of records from a .dbf file

import {DBFFile} from 'dbffile';

async function batchRead() {
    let dbf = await DBFFile.open('<full path to .dbf file>');
    console.log(`DBF file contains ${dbf.recordCount} records.`);
    console.log(`Field names: ${dbf.fields.map(f => f.name).join(', ')}`);
    let records = await dbf.readRecords(100); // batch-reads up to 100 records, returned as an array
    for (let record of records) console.log(record);
}

Example: writing a .dbf file

import {DBFFile} from 'dbffile';

async function batchWrite() {
    let fieldDescriptors = [
        { name: 'fname', type: 'C', size: 255 },
        { name: 'lname', type: 'C', size: 255 }
    ];

    let records = [
        { fname: 'Joe', lname: 'Bloggs' },
        { fname: 'Mary', lname: 'Smith' }
    ];

    let dbf = await DBFFile.create('<full path to .dbf file>', fieldDescriptors);
    console.log('DBF file created.');
    await dbf.appendRecords(records);
    console.log(`${records.length} records added.`);
}

Loose Read Mode

Not all versions and variants of .dbf file are supported by this library. Normally, when an unsupported file version or field type is encountered, an error is reported and reading halts immediately. This has been a problem for users who just want to recover data from old .dbf files, and would rather not write (or wait for) a PR that adds the missing file/field support.

A more forgiving approach to reading .dbf files is provided by passing the option {readMode: 'loose'} to the DBFFile.open(...) function. In this mode, unrecognised file versions, unsupported field types, and missing memo files are all tolerated. Unsupported/missing field types still appear in the `fields` field descriptors, but are absent from the record data returned by the readRecords(...) method.
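That contract can be sketched with plain objects (illustrative only, not dbffile internals; the names here are hypothetical):

```javascript
// Illustrative sketch of the loose-mode contract: a field whose type isn't
// supported stays in the descriptor list but is skipped when record objects
// are built, rather than causing an error.
const SUPPORTED_TYPES = new Set(['C', 'N', 'F', 'Y', 'I', 'L', 'D', 'T', 'B', 'M']);

function materialiseRecord(fields, rawValues) {
    const record = {};
    for (const field of fields) {
        if (!SUPPORTED_TYPES.has(field.type)) continue; // unsupported: skipped, not an error
        record[field.name] = rawValues[field.name];
    }
    return record;
}
```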

API

The module exports the DBFFile class, which has the following shape:

/** Represents a DBF file. */
class DBFFile {

    /** Opens an existing DBF file. */
    static open(path: string, options?: OpenOptions): Promise<DBFFile>;

    /** Creates a new DBF file with no records. */
    static create(path: string, fields: FieldDescriptor[], options?: CreateOptions): Promise<DBFFile>;

    /** Full path to the DBF file. */
    path: string;

    /** Total number of records in the DBF file (NB: includes deleted records). */
    recordCount: number;

    /** Date of last update as recorded in the DBF file header. */
    dateOfLastUpdate: Date;

    /** Metadata for all fields defined in the DBF file. */
    fields: FieldDescriptor[];

    /** Reads a subset of records from this DBF file. The current read position is remembered between calls. */
    readRecords(maxCount?: number): Promise<object[]>;

    /** Appends the specified records to this DBF file. */
    appendRecords(records: object[]): Promise<DBFFile>;

    /** Iterates over each record in this DBF file. */
    [Symbol.asyncIterator](): AsyncGenerator<object>;
}

/** Metadata describing a single field in a DBF file. */
interface FieldDescriptor {

    /** The name of the field. Must be no longer than 10 characters. */
    name: string;

    /**
     * The single-letter code for the field type.
     * C=string, N=numeric, F=float, Y=currency, I=integer, L=logical, D=date, T=datetime, B=double, M=memo.
     */
    type: 'C' | 'N' | 'F' | 'Y' | 'L' | 'D' | 'I' | 'M' | 'T' | 'B';

    /** The size of the field in bytes. */
    size: number;

    /** The number of decimal places. Optional; only used for some field types. */
    decimalPlaces?: number;
}

/** Options that may be passed to `DBFFile.open`. */
interface OpenOptions {
    /**
     * The behavior to adopt when unsupported file versions or field types are encountered. The following values are
     * supported, with the default being 'strict':
     * - 'strict': when an unsupported file version or field type is encountered, stop reading the file immediately and
     *   issue a descriptive error.
     * - 'loose': ignore unrecognised file versions, unsupported field types, and missing memo files and attempt to
     *   continue reading the file. Any unsupported field types encountered will be present in field descriptors but
     *   missing from read records.
     */
    readMode?: 'strict' | 'loose';

    /** The character encoding(s) to use when reading the DBF file. Defaults to ISO-8859-1. */
    encoding?: Encoding;

    /**
     * Indicates whether deleted records should be included in results when reading records. Defaults to false.
     * Deleted records have the property `[DELETED]: true`, using the `DELETED` symbol exported from this library.
     */
    includeDeletedRecords?: boolean;
}

/** Options that may be passed to `DBFFile.create`. */
interface CreateOptions {

    /** The file version to create. Currently versions 0x03, 0x83, 0x8b and 0x30 are supported. Defaults to 0x03. */
    fileVersion?: FileVersion;

    /** The character encoding(s) to use when writing the DBF file. Defaults to ISO-8859-1. */
    encoding?: Encoding;
}

/**
 * Character encoding. Either a string, which applies to all fields, or an object whose keys are field names and
 * whose values are encodings. If given as an object, field keys are all optional, but a 'default' key is required.
 * Valid encodings may be found here: https://github.com/ashtuchkin/iconv-lite/wiki/Supported-Encodings
 */
type Encoding = string | {default: string, [fieldName: string]: string};
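A small hypothetical helper (not part of dbffile's API) that enforces the rule above, i.e. a bare encoding string is always valid, while the object form requires a 'default' key:

```javascript
// Hypothetical validator for the Encoding type: either a non-empty string,
// or an object of field-name -> encoding with a required 'default' key.
function isValidEncoding(enc) {
    if (typeof enc === 'string') return enc.length > 0;
    if (enc !== null && typeof enc === 'object') {
        return typeof enc.default === 'string'
            && Object.values(enc).every(v => typeof v === 'string');
    }
    return false;
}
```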

dbffile's People

Contributors

acdibble, dependabot[bot], diegonc, jhrncar, khaos66, kinolaev, lordrip, meilechwieder, merik-chen, paypacadam, troyvgw, wasenshi123, workingacry, wseng, yortus


dbffile's Issues

open client side

Hi, how can I open a .dbf file client-side, without uploading it?

Search DBF by key

Hello.

I'm trying to get a specific record from the database rather than reading the whole file. Is it possible to get, for example, the NAME column of the record whose ID = 10657, something "SQL style"? I receive a person's ID and want to look up their card ID.

Thank you very much in advance.

Example doesn't work

var DBFFile = require('dbffile');

DBFFile.open('[full path to .dbf file]')
.then(function (dbf) {
  console.log('DBF file contains ' + dbf.recordCount + ' rows.');
  console.log('Field names: ' + dbf.fields.map(function (f) { return f.name; }).join(', '));
  return dbf.readRecords(100);
})
.then(function (rows) {
  rows.forEach(function (row) {
    console.log(row);
  });
})
.catch(function (err) {
  console.log('An error occurred: ' + err);
});

First callback never called.

Cannot update a specific field in a row.

Hi mate, I'm having some trouble updating a specific field in a row; please check this example and help me.

The file has one row, and I only want to update these two specific fields in the first row. I'll be waiting for your comments.

var rows = [{
    ULTSOCIO: '3223',
    ULTAHORROS: '8889'
}];

DBFFILE.open('NUMETRAN.DBF')
    .then(dbf => {
        return dbf.set(rows);
    })
    .catch(err => console.log('An error occurred: ' + err));

dbf-file.js appendRecordsToDBF sets value to empty space for DateTime field

I noticed this line of code, which causes an issue for us when trying to insert a row into a .dbf with an empty datetime field: utils_2.formatVfpDateTime throws an exception saying getTime is not a function, which is correct when the datetime value has been artificially set to empty space.

Could you consider making a code change to skip the datetime field when the value is null? Thanks.
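A null-safe variant of that conversion could be sketched as follows. formatVfpDateTimeSafe is a hypothetical name (not the library's helper); the Julian-day arithmetic follows the standard civil-calendar formula used by the VFP 'T' field layout, and UTC components are used here for determinism, whereas the library's own helper may use local time:

```javascript
// Null-safe sketch: return null (caller writes blanks) for null/undefined
// or non-Date values such as the empty string seen in the issue above.
function formatVfpDateTimeSafe(value) {
    if (value == null || !(value instanceof Date)) return null;
    // Julian day number from the civil date (standard Fliegel-Van Flandern formula).
    const y = value.getUTCFullYear(), m = value.getUTCMonth() + 1, d = value.getUTCDate();
    const a = Math.floor((14 - m) / 12);
    const yy = y + 4800 - a;
    const mm = m + 12 * a - 3;
    const julianDay = d + Math.floor((153 * mm + 2) / 5) + 365 * yy
        + Math.floor(yy / 4) - Math.floor(yy / 100) + Math.floor(yy / 400) - 32045;
    // Milliseconds elapsed since midnight.
    const msSinceMidnight = ((value.getUTCHours() * 60 + value.getUTCMinutes()) * 60
        + value.getUTCSeconds()) * 1000 + value.getUTCMilliseconds();
    return { julianDay, msSinceMidnight };
}
```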

 // Write the records.
        for (let i = 0; i < records.length; ++i) {
            // Write one record.
            let record = records[i];
            validateRecord(dbf.fields, record);
            let offset = 0;
            buffer.writeUInt8(0x20, offset++); // Record deleted flag
            // Write each field in the record.
            for (let j = 0; j < dbf.fields.length; ++j) {
                // Get the field's value.
                let field = dbf.fields[j];
                let value = record[field.name];
                if (value === null || typeof value === 'undefined')
                    value = '';
                let encoding = getEncodingForField(field, dbf._encoding);
                // Encode the field in the buffer, according to its type.
                switch (field.type) {
                    case 'C': // Text
                        let b = iconv.encode(value, encoding);
                        for (let k = 0; k < field.size; ++k) {
                            let byte = k < b.length ? b[k] : 0x20;
                            buffer.writeUInt8(byte, offset++);
                        }
                        break;
                    case 'N': // Number
                    case 'F': // Float - appears to be treated identically to Number
                        value = value.toString();
                        value = value.slice(0, field.size);
                        while (value.length < field.size)
                            value = ' ' + value;
                        iconv.encode(value, encoding).copy(buffer, offset, 0, field.size);
                        offset += field.size;
                        break;
                    case 'L': // Boolean
                        buffer.writeUInt8(value ? 0x54 /* 'T' */ : 0x46 /* 'F' */, offset++);
                        break;
                    case 'T': // DateTime
                        const { julianDay, msSinceMidnight } = utils_2.formatVfpDateTime(value);

Long text fields breaking code

I had an issue with a super-old (1990s) .dbf file: the developer had created a 256-character text field. I didn't notice the real issue at first because I was using loose read mode.

I've changed a few lines to work around it. It's a quick-and-dirty solution, but it works for me. I'll post a pull request in case anyone else needs it.

Getting error (node:7468) UnhandledPromiseRejectionWarning: Error: Type '0' is not supported at Object.validateFieldDescriptor (C:\chatbot\node_modules\dbffile\dist\field-descriptor.js:16:15) at openDBF (C:\chatbot\node_modules\dbffile\dist\dbf-file.js:82:32)

I am reading a big .dbf file and I am getting this error:

(node:7468) UnhandledPromiseRejectionWarning: Error: Type '0' is not supported
at Object.validateFieldDescriptor (C:\chatbot\node_modules\dbffile\dist\field-descriptor.js:16:15)
at openDBF (C:\chatbot\node_modules\dbffile\dist\dbf-file.js:82:32)

Why is there a type '0'?

Memo file not found for file '${path}'

Hi all,

thanks for the work so far.

I have an issue regarding case-sensitivity of input files. Unfortunately my filenames come in two cases:

table1.dbf
table1.dbt

TABLE2.DBF
TABLE2.DBT

But I can't copy or modify the client's data.

Currently I just patched the package:

if (fileVersion === 0x83 || fileVersion === 0x8b) {
    memoPath = path.slice(0, -path_1.extname(path).length) + '.dbt';
    let isMemoFileMissing = await utils_1.stat(memoPath).catch(() => 'missing') === 'missing';
    if (isMemoFileMissing){
        memoPath = path.slice(0, -path_1.extname(path).length) + '.DBT';
        isMemoFileMissing = await utils_1.stat(memoPath).catch(() => 'missing') === 'missing';
    }
    if (isMemoFileMissing)
        memoPath = undefined;

Any better solutions?

Regards,
Acry

Missing the first character of value

Hi,
I'm really pleased to have found a DBF reader for Node.js :) Thanks!
But when I tested it, I got wrong results for some fields: the first character is missing. The DBF encoding is CP-852.
(I did not test creating or modifying records, because I only need to read from old databases.)

{ NEV: 'FD', TARTALOM: 'iák munkaváll. 110120' } - the wrong result
{ NEV: 'F', TARTALOM: 'Diák munkaváll. 110120' } - I think it should be like this

Maybe tabs or extra spaces in the field value cause the problem?
The console log results:

[screenshot: console output, 2022-05-31]

Numeric with decimalPlaces

When the amount has decimal places, e.g. 175,50, the created .dbf file is empty; when the amount is 175.00, the file is not empty.

I tried:
{ name: 'amount', type: 'N', size: 19 },
{ name: 'amount', type: 'N', size: 19, decimalPlaces: 2 },
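For context, the library's N-field write path (quoted in another issue on this page) stringifies, truncates, and left-pads the value. A self-contained sketch of that serialisation, with decimalPlaces additionally applied via toFixed (formatNField is a hypothetical name), illustrates that the value must be a JS number such as 175.5, not a locale string like '175,50':

```javascript
// Sketch of serialising a numeric value into an N field of a given width:
// render with the requested precision, truncate if too wide, right-align.
function formatNField(value, size, decimalPlaces = 0) {
    let text = value.toFixed(decimalPlaces); // e.g. 175.5 -> '175.50'
    text = text.slice(0, size);              // truncate if it exceeds the field width
    return text.padStart(size, ' ');         // left-pad so the digits are right-aligned
}
```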

readRecords will occasionally throw a RangeError

I'm using this library to convert some old DBF data, and I'm running into an issue where readRecords(n) will throw RangeErrors. Currently I've just wrapped readRecords in a try/catch and only read one record at a time, which works, but naturally some data is lost.

Full stack trace:

RangeError [ERR_BUFFER_OUT_OF_BOUNDS]: Attempt to access memory outside buffer bounds
    at boundsError (internal/buffer.js:80:11)
    at Buffer.readInt32LE (internal/buffer.js:386:5)
    at int32At (/home/kurtis/Documents/anthology-converter/node_modules/dbffile/dist/dbf-file.js:264:72)
    at readRecordsFromDBF (/home/kurtis/Documents/anthology-converter/node_modules/dbffile/dist/dbf-file.js:345:35)
    at async Object.module.exports.convert (/home/kurtis/Documents/anthology-converter/src/converters/custconverter.js:31:17) {
  code: 'ERR_BUFFER_OUT_OF_BOUNDS'
}

Unfortunately I can't attach the files in question as they contain confidential/private data. However, I can share some metadata about the file:

  • Version: 48
  • # records: 23k
  • The file contains the Y (money) column type, which is currently unsupported

Thanks in advance for the help.

Issue using with node project.

I get the following error when trying to use the module

`App threw an error during load
C:\Users\desktop\testapp\main.js:12
import {DBFFile} from 'dbffile';
^^^^^^

SyntaxError: Cannot use import statement outside a module`

I am a beginner so I am sure this is user error. What is the proper way to import this project?

save file error

I get this error when I try to save a file.
An error occurred: TypeError: fs.openAsync is not a function
Here is my code

 DBFFile.create(`Archivo - ${moment().format(format)}.dbf`, fieldDescriptors)
          .then(dbf => {
              console.log('DBF file created.');
              return dbf.append(rows);
          })
          .then(() => console.log(rows.length + ' rows added.'))
          .catch(err => console.log('An error occurred: ' + err));

Problem with Node.js version 11

@yortus
When I try to add this to my project, it throws an error:

import { DBFFile } from 'dbffile';

import { DBFFile } from 'dbffile';
^

SyntaxError: Unexpected token {
at new Script (vm.js:85:7)
at createScript (vm.js:266:10)
at Object.runInThisContext (vm.js:314:10)
at Module._compile (internal/modules/cjs/loader.js:698:28)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:749:10)
at Module.load (internal/modules/cjs/loader.js:630:32)
at tryModuleLoad (internal/modules/cjs/loader.js:570:12)
at Function.Module._load (internal/modules/cjs/loader.js:562:3)
at Module.require (internal/modules/cjs/loader.js:667:17)
at require (internal/modules/cjs/helpers.js:20:18)
[nodemon] app crashed - waiting for file changes before starting...

FoxPro9 Memo fields are 4 bytes in size

When trying to open a vfp9 DBF, which contains at least one field of type Memo, then this lib throws this error:
Invalid field size (must be 10)

The size 10 is hard-coded inside field-descriptor.ts

if (type === 'M' && size !== 10) throw new Error('Invalid field size (must be 10)');

I'm not sure if this differs from other dBase DBFs, but as far as I know VFP9 always uses a size of 4 bytes for memo fields.

If it does differ, maybe there could be a dialect option?

Support stream?

Does the library support streams? e.g.

const readStream = fs.createReadStream('/path/to/file.dbf');
readStream.pipe(/* process each row in the dbf file */);

I get an error when I try the example

index.ts:14:64 - error TS2345: Argument of type '{ name: string; type: string; size: number; }[]' is not assignable to parameter of type 'FieldDescriptor[]'.
Type '{ name: string; type: string; size: number; }' is not assignable to type 'FieldDescriptor'.
Types of property 'type' are incompatible.
Type 'string' is not assignable to type '"C" | "N" | "F" | "L" | "D" | "I" | "M" | "T" | "B"'.

14 let dbf = await DBFFile.create('<full path to .dbf file>', fieldDescriptors);

Appending a record containing Chinese text throws TypeError: "value" argument is out of bounds

let fieldDescriptors = [
        {
            name: 'C_ID',
            type: 'C',
            size: 16,
        },
        {
            name: 'C_HM',
            type: 'C',
            size: 64,
        }, {
            name: 'C_DZ',
            type: 'C',
            size: 64,
        },
        {
            name: 'N_Y',
            type: 'N',
            size: 4,
        },
        {
            name: 'N_M',
            type: 'N',
            size: 4,
        },
        {
            name: 'D_CB',
            type: 'D',
            size: 8,
        },
        {
            name: 'N_BCCM',
            type: 'N',
            size: 8,
        },
        {
            name: 'N_SCCM',
            type: 'N',
            size: 8,
        },
        {
            name: 'I_CBBZ',
            type: 'N',
            size: 1,
        }
    ]
{
                C_ID: '',
                C_HM: item.customerNumber,
                C_DZ: String("测试"),
                N_Y: Number(dateformat(new Date(item.logDateTime), 'yyyy')),
                N_M: Number(dateformat(new Date(item.logDateTime), 'mm')),
                D_CB: new Date(item.logDateTime),
                N_BCCM: Number((item.number/1000).toFixed()) || 0,
                N_SCCM: 0,
                I_CBBZ: 1
            }

Unsupported version

Hi.

I'm really interested in your parser because it allows defining each field's length individually. I need to append new records to an existing database. However, the parser gives me the following message: "AssertionError: File 'FILENAME.DBF' has unknown/unsupported dBase version: -11.".

Is it possible to make it work?

Thank you very much.

Error: Cannot create a string longer than 0x1fffffe8 characters

I read a bunch of DBF files successfully using your library (thanks for that!). However, I get an error on certain files

(node:79490) UnhandledPromiseRejectionWarning: Error: Cannot create a string longer than 0x1fffffe8 characters
    at Object.slice (buffer.js:616:37)
    at Buffer.toString (buffer.js:804:14)
    at SBCSDecoder.write (/Users/MyUser/Code/ProjectName/node_modules/iconv-lite/encodings/sbcs-codec.js:68:19)
    at Object.decode (/Users/MyUser/Code/ProjectName/node_modules/iconv-lite/lib/index.js:42:23)
    at readRecordsFromDBF (/Users/MyUser/Code/ProjectName/node_modules/dbffile/dist/dbf-file.js:307:52)

This is more info about the old file (from a 1991 MS DOS application) using the python dbf package:

Table:         Test.DBF
Type:          dBase III Plus
Codepage:      ascii (plain ol' ascii)
Status:        DbfStatus.CLOSED
Last updated:  2020-05-23
Record count:  240
Field count:   18
Record length: 166
--Fields--
    0) nr N(4,0)
    1) klantnr N(4,0)
    2) faktype C(1)
    3) datum D
    4) tekst M
    5) netto N(7,0)
    6) btw N(7,0)
    7) totaal N(7,0)
    8) medecon N(7,0)
    9) netto1 N(7,0)
    10) netto2 N(7,0)
    11) netto3 N(7,0)
    12) netto4 N(7,0)
    13) netto5 N(7,0)
    14) uitvoer N(7,0)
    15) uitvoerneg N(7,0)
    16) betkode C(1)
    17) diversen C(60)

(I can send it if needed, 40kB).

I used this snippet of code to reproduce the error:

import { DBFFile } from "dbffile";

(async function test() {
  const filePath = "Test.DBF";
  const dbf = await DBFFile.open(filePath, { encoding: "ascii" });

  const records = await dbf.readRecords(10);
})();

I also tried no encoding, CP850, and some other options, but I'm not sure whether it's encoding-related. Providing no encoding at all (the default Latin-1) worked for the other files of the same database.

Can I update a specific row?

I want to be able to update a specific row.

If in-place updating is for some reason impossible, how about rewriting the whole file with updated content each time a row changes? We have the descriptors and everything in memory; we could delete the old file and create a new one containing all existing records plus the updated ones. Could this feature be built into the library?

Also, what file size limit can this library handle? Does it just depend on my device's resources?

Support for concurrent read/ writes

Hi thank you for the great work!

I'm fairly new to DBF files, and unfortunately I can't find much documentation online about DBF specifications.

Does this package (or do DBF files themselves) support concurrent reads and writes?

For context, I'm currently working on an API integration for a library system where they use DBF files on a network drive as their database. When reading the DBF files, will it cause issues to the existing terminals (responsible for checking in and out books)?

Thank you once again.

Add support to edit a record

Hi @yortus, I have a use case in which I need to edit a record; do you think this could be useful for the library?

If so, I'm thinking that we could return an array of record objects that each carry their own offset. That way we can edit an object and then just call .update() on it, e.g.

const dbf = DBFHandlerFile('dbf-path'); // please note that this is a different class
const records = dbf.getAll(); // this could return an array of DBFRecordHandler

console.log(records[0]); // prints { field1: 10, field2: 'text-field' }
records[0].update('field1', 20); // this could throw an exception if the field doesn't exist
records[0].save(); // save the record

What do you think? (Sorry if this looks like a lot of work.)

Error on read large file

Hi, this error happens when I try to read a large file (36k records):

{ [Error: EINVAL: invalid argument, read] errno: -4071, code: 'EINVAL', syscall: 'read' }

I can't find anything on Google :(

Error: Type 'F' is not supported

I have a DBF containing some float data and get "Error: Type 'F' is not supported" when reading it. Is float data on the roadmap of supported field types? I looked at the code and it seems I could add another case statement for it, but I'm not familiar with any gotchas I might run into during the process. Can you help? @yortus

Add support for Visual FoxPro tables & DateTime fields

Hi! Thanks for this nice library. I wanted to ask whether it's possible to add support for Visual FoxPro tables and DateTime fields. I'm currently migrating some local VFP data to a web server. I'm working on it; can I make a PR for your consideration when it's done?

Thanks in advance!

Thai Character Support

Hi, greeting from Thailand :)

Can you please guide me on how to make this work with Thai characters?
Everything else works perfectly.

Reading a very large file

Hi @yortus

I was having trouble reading a large file: dbf.readRecords(dbf.recordCount) results in an out-of-memory exception. Could you suggest a way to iterate through the whole file in small chunks?
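One approach is to call readRecords(...) repeatedly with a bounded batch size, since the read position is remembered between calls. A sketch against a stub that has the same readRecords(maxCount) contract as a DBFFile instance (with the real library, replace makeStubDbf() with await DBFFile.open(path)):

```javascript
// Process a whole file in bounded-memory chunks using the
// readRecords(maxCount) contract: each call advances the read position,
// and an empty batch signals end of file.
async function processInChunks(dbf, chunkSize, onRecord) {
    for (;;) {
        const batch = await dbf.readRecords(chunkSize);
        if (batch.length === 0) break; // end of file
        for (const record of batch) onRecord(record);
    }
}

// Stub standing in for an opened DBFFile instance (illustration only).
function makeStubDbf(records) {
    let pos = 0;
    return {
        recordCount: records.length,
        readRecords: async max => records.slice(pos, pos += max),
    };
}
```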

Invalid DBF: Incorrect record length problem

Invalid DBF: Incorrect record length

Can I find out what this error message means?

I guess it indicates a mismatch between the field length attributes and the record length, but I couldn't figure it out, so could I get an answer?
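For context: a .dbf record occupies 1 byte for the deleted flag plus the sum of all field sizes, and the file header stores this total; the error typically means the header value and the value computed from the field descriptors disagree (e.g. a truncated or non-standard descriptor area). A self-contained sketch of that check:

```javascript
// Expected record length: 1 deleted-flag byte + the sum of field sizes.
function expectedRecordLength(fields) {
    return 1 + fields.reduce((sum, f) => sum + f.size, 0);
}

// The header's record-length value should match the computed one.
function recordLengthIsConsistent(headerRecordLength, fields) {
    return headerRecordLength === expectedRecordLength(fields);
}
```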

Show the fieldName when validation failed

It would help a lot if the validation showed which field failed.

See below for a small change I made to the code.

function validateRecord(fields, record) {
    for (let i = 0; i < fields.length; ++i) {
        let name = fields[i].name, type = fields[i].type;
        let value = record[name];
        // Always allow null values
        if (value === null || typeof value === 'undefined')
            continue;
        // Perform type-specific checks
        if (type === 'C') {
            if (typeof value !== 'string')
                throw new Error('Expected a string for ' + name);
            if (value.length > 255)
                throw new Error('Text is too long (maximum length is 255 chars) for ' + name);
        }
        else if (type === 'N' || type === 'F' || type === 'I') {
            if (typeof value !== 'number')
                throw new Error('Expected a number for ' + name + ' but got ' + value);
        }
        else if (type === 'D') {
            if (!(value instanceof Date))
                throw new Error('Expected a date for ' + name);
        }
    }
}

Problems to add new records using dbf.appendRecords(records)

Hi guys, I have a problem inserting a new record into a DBF file. My error is:

Error: SALESDATE: expected a date

My code:
let records = [
{
CID: 'JN0LZGRL',
CM: '01',
SALESDATE: '15/03/23',
CCOD_DOC: '07',
CSER: '00B070',
CNUM: '0000000000241',
}
]

Please help!

Out of bounds error when reading records from VFPwMemo

Hello,
I am working on a tool that uses this library to open DBF files.
I have a .dbf file with its .fpt file in the same directory. DBFFile.open works, and the object is created:
[screenshot: opened DBFFile instance]
As you can see, it correctly linked the files together and interpreted the field types.
However, when I attempt to read records from it, the function apparently accesses the wrong memory blocks:
[screenshot: out-of-bounds error]
I think it refers to this part of the code:
[screenshot: code excerpt]

Is this a bug, or am I doing something wrong?
Thanks in advance for any answers and help <3 this library has been a life-saver
